Below the Fold: Inequality in a Predatory World

We live in a predatory world. The poor, the helpless, or simply the less well off find themselves dehumanized and victimized around the world. They are often defenseless against the degradation and violence visited upon them by the better off, or by the states the better off control. Without economic equality, human well being, a life rich in the possibilities of self-fulfillment, is impossible. Without economic equality, any gains in achieving full citizenship including racial, gender and political equality are unsustainable.

Indeed, quite the opposite occurs routinely. Disadvantage awakens in the advantaged a desire for gain at the expense of others, even a desire for conquest over others less powerful. The English philosopher John Hobbes argued that when people found themselves in a state of equality, their gnawing fear of losing their status would transform their society into a war of all against all. The world’s rich are showing that Hobbes, if anything, under-estimated the power of circumstances. Even overwhelming economic superiority does not quiet the fear of losing. As the saying goes, you can never be too rich, but for reasons the society wags never fathomed. For those who have it all, they never have enough. Instead it quickens their desire for more. It also arouses in them a need to dominate and degrade the disadvantaged masses beneath them. They enact their sovereignty by violating the dispossessed. The rich become what Hobbes believed the sovereign must become — a monstrous Leviathan capable of instilling shock, awe, and death, this time among the world’s poor.

Perhaps only the cynical Manicheans trying to run the world from the White House understand this need for the Leviathan. The endless desire for more wealth, the fear of the poor from whom the wealth is extracted, and the need to make the masses stand in fear suggest a reason for our period’s particular cruelty. Endless wars, mass annihilations, horrific tortures, barbaric incarcerations, and above all a policy of lawlessness are Leviathan’s means. Its works produce grisly as well as material satisfactions for the rich and a ghastly theater of violence and subjection for the rest.

I argue that the more economically unequal as a world we become, the more an inferno our lives will become. Liberal intellectuals and policy makers, or perhaps one should say the rest of the world ruling elite, seem inured to the relationship between growing inequality and growing inhumanity. Instead of demanding economic equality, they focus on poverty reduction, hoping that reducing poverty will make a dent in economic inequality. Perhaps cynically too, they hope that modest improvements in living standards will dampen popular resistance to the rule of the rich, to which they, though less than the rich themselves, are acclimated.

“Let us abandon the fight against inequality,” writes Foreign Policy editor Moises Naim in a recent Financial Times op-ed. “Let us stop fighting a battle we cannot win and concentrate all efforts on a fight that can succeed. The best tools to achieve a long-term, sustained decline in inequality are the same as those that are now widely accepted as the best available levers to lift people out of poverty.” By fighting poverty through health, education, jobs and housing, Naim argues, we will wear inequality down.

Naim expresses, albeit from the liberal side, the consensus view of the rich-country development community, the World Bank, and an international effort such as the UN Millennium Project. Poverty reduction is the goal because it is achievable, and it is saleable as a strategy precisely because poverty reduction does not call for a redistribution of world resources. Thus, liberals, either naïve or too mindful of the Leviathan, content themselves with lifting up the abject. They either do not countenance or reject outright liberating the dispossessed from subjection.

The trouble with the liberal position, though very different from Manichean murder and terror, is that it is rather wishful, and it ignores rather well established facts. Eliminating poverty does not achieve equality, and it doesn’t take a Nobel-winning economist to show it. The United States hit its lowest historical level of economic inequality in 1968, a time of great prosperity and government intervention to eliminate poverty. The level we reached then was equivalent to the economic inequality we would find in many poor countries today, which is to say a pretty abysmal level. Note too that the good times of the Clinton era and the recent recovery during the Bush regime have not stopped economic inequality from growing. In fact, inequality in America has been accelerating, not slowing.

Economic growth alone does not eliminate poverty. Many economists forecast that it will take China, even at its remarkable rate of economic growth, almost 30 years to eliminate dire poverty, leaving a massive job of lifting another up to half a billion people out of three to four dollar a day poverty. Perhaps cognizant of this, the Chinese state is taking dramatic steps to redistribute income to the rural peasantry, eliminating land taxes, providing free public education, and rebuilding a rural health system. Yet, even as Chinese poverty will prove a difficult problem to solve, a middle class will be living at the level of the today’s Korean middle class, and the great wave of capitalist development will have created a massive new generation of the truly, world-level wealthy. Inequality will get worse, and one can only wish good luck to the Chinese peasants.

The first lesson here is that economic growth creates the wealthy first, and brings along the masses later – far later than the time necessary to earn their way to equality through labor or enterprise. It happens inside countries like our own. It happens across countries. Consider evidence accumulated by World Bank economist Branko Milanovic that the ratio of inequality, rich country to poor country, has grown from 19 to 1 in 1960 to 37 to 1 in 2000. This is true despite the spread of industrialization, thought to be the holy grail of development, and rising income levels in Asia.

The second lesson is that if you don’t go after economic equality, and settle instead for poverty reduction, there is little prospect that the disadvantaged can hold on to their gains given the predations of the rich. Again, the US is a paradigm case. Even as the rich have gotten richer over the past quarter century, the American state has actually contrived to take back a variety of welfare benefits from the poor. As America’s medium family income has stagnated since the seventies, the poor have become objectively poorer. The state has ignored these facts and refused increases in life support consisting of income supplements, housing assistance, health care, education, and food assistance.

The only solution that will work, whether at the national or the international level, is redistribution of the wealth. The rich must be made poorer and the poorer their equals, if the goal is a modicum of well being for all.

We know how to do this at the national level, and again the evidence for its success is widely known. Taxes work. Not only did they increase equality in America starting with World War I and beginning again during the New Deal, but inequality increased as taxation radically declined starting with the Reagan Administration in 1981.

At the international level, how to proceed is less certain, given that no international body possesses the means to compel peoples via their states to contribute tax monies to the common good of all. The amounts necessary to raise are not hard to calculate. We are masters of calculation in this age. Currently, rich countries cannot even come up with 1% of their Gross Domestic Product in transfer payments to poor countries, a figure once considered the minimum moral response to global destitution. Despite six years of posturing about supporting the UN Millennium initiative to eliminate much of less than a dollar-a-day poverty worldwide, rich country support is declining rather than increasing. It is important to put redistribution at the top of the global agenda rather than engage in the bait and switch of poverty reduction.

Economic equality requires an obviously enormous and lasting redistribution of wealth worldwide. Yet someone once calculated that there is US$5000 in wealth for every person on the planet, the equivalent of the Gross Domestic Product of Uruguay. Imagine the world as a big Uruguay. Things could be worse: people in Uruguay live as long as Americans do, their child mortality rate is even with ours, and less than 4% of their children suffer malnutrition.

The beaches are beautiful, Montevideo is a dream, and no one expects an Uruguayan invasion of Iran any time soon.



Evolutionary pressures banish unfit biological species into extinction. The American descendents of Homo sapiens species will explode into extinction at the midriff. A walk through Main Street, USA will convince any skeptic of the veracity of this prediction. And it will all happen due to the adipose state of the nation.

Fact: 65% of US population is either overweight or obese.
Fact: The number of obese Americans zoomed from 14.5% in 1976 to 30.5% in 2000.

Millions of Americans are obese, diabetic, hypertensive, hyperlipidemic and succumb to this murderous metabolic syndrome. Strokes, heart attacks, fatty liver, osteoporosis, cancer, depression, arthritis and sleep apnea ravage the obese. The chart below, reproduced from Baylor College of medicine, depicts all havoc unleashed by obesity:

Screenhunter_8_2

We have an epidemic. We spend $117 billion directly or indirectly on obesity and its complications; we eat more, exercise less and our bodies have become a battleground of conflicting hormones and peptides.

We thought our loads of fat were meant only for aesthetic shame but in 1994 scientists told us that the adipose tissue is an endocrine organ! Yes, an endocrine organ, similar to thyroid and adrenal gland. Like them it secrets into blood, a hormone — in this case, Leptin — which travels to remotely located hypothalamus and suppresses appetite.

In reverse, lack of Leptin stimulates appetite, encourages over eating, thus increasing fat storage. (This probably rendered an evolutionary advantage to help store a reservoir of fat for lean days of starvation.) Leptin deficient mice due to gene depletion (ob-ob mice) are obese and leptin replacement cures their obesity. Leptin gene deficiency and obesity is rare in humans and improves with leptin therapy.

Corollary: if leptin were administered to obese people they should loose weight. So the investigators tried it but only with partial success. It so happens that obese people have high – not low – levels of leptin. Their cells lack the receptors for leptin to attach and are resistant to leptin therapy. Thus, obese are either leptin deficient or leptin receptor deficient.

Leptin is not the only attention grabber; ghrelin entered the stage in 1999. Stomach secrets grehlin in response to hunger; a hungry man has high grehlin. The circulating grehlin stimulates the satiety center in the hypothalamus and grehlin secretion stops. A satiated man has low grehlin. Now add to this complexity insulin, cholecystokinin and GLP-1 Low insulin levels stimulate hunger and initiate the act of eating. The fat and probably protein in the meal stimulates cholecystokinin secretion from the upper small bowel which suppresses appetite and slows gastric emptying causing fullness and satiation. The act of eating stops. GLP-1 oozes out of the lower small bowel to suppress the appetite further.

But that is not all; in science it gets complex before it get simple. See the diagram below reproduced from adipose society of Baylor College of medicine:

Screenhunter_9_1

Adiposity is regulated by a set of short and long term signals. Those for the short term determine the size of a meal and it frequency; the long term signals determine the fat storage.

The mechanism of appetite regulation and fat deposit is an interaction of competing and feed back signals. The following are some of the mediators:

  1. Neurotransmitters in the hypothalamus, like NPY, AGR,5HT
  2. Gut hormones like leptin , grehlin , cholecystokinin and GLP1
  3. Other circulating hormones like cortisol and thyroxine
  4. Sensory input from stomach and intestines
  5. External input like smell, taste and emotions

Currently the US obesity hormones are in state of misalignment: the USA is a leptin resistant, ghrelin deficient, cholecystokinin inefficient and insulin abundant nation.

And we still don’t know which molecule is the master conductor of this orchestra and how to transform this cacophony into harmony. It is obvious that the mechanism of appetite regulation and fat deposition is complex which leads to general failure of any single mode of therapy. Unrealistic individual goals of weight reduction further thwart the success. The therapy of obesity must include a combination of the following:

  1. Eat less: A daily deficit of 500 to 1000 calories is reasonable. This is the single most important component of therapy and most difficult to adhere to.
  2. Exercise a lot: Strenuous aerobic activity for over 200 minutes per week maintained over a long period of time with calorie restriction is effective. Physical activity conserves fat free mass, improves glucose tolerance and lipid profile. Fact: Moderate exercise like walking 45 minutes a day for 5 days a week has minimal effect on weight loss.
  3. Modify behavior to avoid temptation to engorge on food. This warrants life style change and altering emotional response to food. Self monitoring and social support are essential.
  4. Use drug therapy: Only two drugs have been approved by FDA for long term therapy.
    • Sibutaramine causes anorexia by blocking neuronal monoamine uptake.
    • Orlistat decreases fat absorption
  5. Get surgery if morbidly obese and nothing else helps.
    • Gastric bypass to channel food directly into mid intestine thus decreasing absorption
    • Gastric banding and stapling to diminish the size of the stomach
    • Combination of bypass and stomach size reduction.

Fact: Even moderate weight loss of 5% decreases the complications significantly

The prescription of eat- less-exercise- more-modify- behavior is still the best choice but compliance has been pathetic. On average, a person on a weight reduction diet has tried and failed three to six other diets before. This failure has created an enormous market opportunity for fad diet authors and manufacturers. Some examples:

  1. Eat less carbohydrates ( Atkins, South beach)
  2. Eat less fats ( Ornish, Pritikin)
  3. Eat less of both ( Weight Watchers, Jennie Craig)
  4. Eat very low calorie diet:400 calories ( Optifast, Cambridge)

The failure has also challenged the scientists to discover new therapies and many new drugs are in the various stages of development. One exciting possibility is the recent understanding of the endocannabinoid (endogenous cannabis like molecules) system. When investigators were working to understand the molecular action of Cannabis Sativa they found cannabinoid receptors (CB1) in the central nervous system and in the adipose tissues. Stimulation of CB1 in the brain increases appetite and stimulation in the fat cells increases fat deposition. It seems this system is in perpetual overactive drive in the obese and blockage of the receptors decreases appetite and promotes weight loss. Rimonabant, a drug now in clinical trials blocks the CB1 receptors and will be an exciting new weapon to combat obesity.

Fat20cat_2Many other drugs are under development but only an accurate understanding of the mechanism of obesity will lead to a better therapy. Science travels from metaphorical to mathematical; the journey is both exciting and agonizing. The investigation meanders, looses way, finds it again, and races to the next stop, falters, sprints and trundles along with hope towards exhilarating simplicity and elegance. The investigation of obesity is scurrying through the difficult middle stretch at present. We better arrive soon or the speed of decline of the American civilization will be directly proportional to the rate of expansion of its girth.

The sobering fact is:

I think and breathe and live because I eat
I eat therefore I am
But soon I will not be,
Because I ate.

Random Walks: Narnia, Schmarnia

[Author’s Note: Some of you may have received an earlier, unfinished version of this particular column. It was not, as one reader suggested, an avant-garde literary choice — Behold! The Half-Finished Post! — but a sad case of an inexperienced blogger accidentally hitting “Publish Now” when she really meant to save it in “Draft” mode. Really, it’s a miracle she is allowed to blog at all. But she promises to never do it again.]

C.S. Lewis’ Chronicles of Narnia have long enjoyed enormous popularity among readers of all ages, particularly among those with Christian leanings. That’s not surprising, since Lewis was himself an avowed Christian and made no bones about the fact that the series was intended as a reworking of the traditional Christian “myth” (and I use that term in the literary sense). But it’s not obvious to everyone, as I discovered when a friend of mine recently went to see the much-anticipated film version of The Lion, the Witch and the Wardrobe. A staunch agnostic, she was horrified to find that somehow, in the translation to the silver screen, the subtleties of Lewis’ mythical retelling were lost, leading to what she considered to be little more than a ham-fisted, didactic advertisement for the Christian religion.

My friend is not alone in her objections to the film (I share them) — indeed, it is a common refrain when discussing Lewis’ literary output. There are many people who view Lewis with suspicion, precisely because he has been so warmly embraced by evangelical Christians. And in the case of bestselling children’s author Philip Pullman, author of the His Dark Materials trilogy (a wonderful read in its own right), suspicion gives way to outright hostility. Pullman is among Lewis’ most outspoken critics, clearly evidenced by a 1998 article in The Guardian, in which he dismisses the Narnia books as “one of the most ugly and poisonous things I’ve ever read.” More recently, he dismissed his rival’s work as being “blatantly racist,” “monumentally disparaging of women,” and blatant Christian propaganda in remarks at the 2002 Guardian Hay festival. (Pullman in turn has been unjustly attacked by right-wing naysayers as “the most dangerous author in Britain” and “semi-Satanic”; he is, in many respects, the anti-Lewis.)

Pullman has made some valid points in his public comments about Lewis and the Narnia chronicles. In addition to his avowed Christianity, Lewis was a conservative product of his era, with all its recumbent prejudices. And he was not, by any means, “nice,” possessing a flinty,  intellectually stringent, sometimes slightly bullying disposition that didn’t always win friends and influence people. Lewis did not suffer fools gladly, if at all. I doubt many of the evangelical Christians who deify Lewis today would have much cared for him in person, and vice versa. Yet he was hardly evil incarnate. I am not a diehard fan of Lewis’ work, but I will be so bold as to suggest that the truth lies somewhere in between the two extremes of beloved saint and recalcitrant sinner. Lewis was a man, plain and simple, with all the usual strengths and foibles.Cslsmoking

As for the charge of Lewis’ work being blatant Christian propaganda, Pullman somewhat over-states the case. Certainly Lewis deliberately evoked the themes and symbols of the Christian mythology in much of his writing, but so did many of the greatest writers in Western literature: Dante, Milton, and Donne, to name just a few. The problem lies not with the choice of themes, but with Lewis’ decidedly heavy-handed style. In his hands, the subtle symbolism of myth more often than not devolved into  overly-simplistic allegory — a far less satisfying approach, artistically.

Lewis certainly understood the power of myth. He’d been fascinated with mythology since his childhood, particularly the Norse myths, and within those, relished the story of Balder the Beautiful, struck down by an errant arrow as a result of the meddlesome Loki. Balder is the Christ figure of the North. Norse mythology was an enthusiasm Lewis shared with J.R.R. Tolkien when the two men met at Oxford in the 1930s. (If nothing else, we may owe The Lord of the Rings trilogy in part to Lewis, who was the first to read early drafts of Tolkien’s imagined world and who encouraged his friend. Tolkien himself later credited Lewis with “an unpayable debt” for convincing him the “stuff” could be more than a “private hobby.”) Along with several other Oxford-based writers and scholars, they began meeting regularly at a local pub called The Eagle and Child, fondly dubbed The Bird and the Baby.

The Oxford Inklings, as they came to be called, were arguably the literary mythmakers of the mid-20th century, at least in England. In addition to Lewis and Tolkien, the group included the lesser-known Charles Williams, who penned fantastical tales in which, for example, the symbolism of the Major Arcana in the traditional tarot deck becomes manifest (The Greater Trumps), while the Earth is invaded not by aliens from outer space, but by the Platonic Ideal Forms (The Place of the Lion). The Platonic Lion featured in the latter may have influenced Lewis’ choice of that animal to represent his Narnia Christ figure, Aslan.

Ironically, it was Lewis’ love of myth that eventually led to his conversion. He was a notoriously prickly atheist for much of his early academic career; in fact, he was as dogmatic about his atheism as he was later about his Christian beliefs, so if nothing else, the man was consistent in his character.  He was also rigorously trained in logic, thanks to an early tutor, W.T. Kirkpatrick. An anecdote related in Humphrey Carpenter’s book, The Inklings, tells of Lewis’ first meeting with Kirkpatrick. Disembarking onto the train platform in Surrey, England, Lewis sought to make small talk by remarking that the countryside was more wild than he’d expected. Kirkpatrick pounced on this innocuous observation and led his new student through a barrage of questions and challenges to his assumptions, concluding, “Do you not see that your remark was meaningless?”

As Carpenter writes, the young Lewis thereby “learned to phrase all remarks as logical propositions, and to defend his opinions by argument.” Among the irrational concepts Lewis rejected was belief in God, or any religion, writing to his Belfast friend Arthur Greeves, “I believe in no religion. There is absolutely no proof for any of them, and from a philosophical standpoint Christianity is not even the best. All religions, that is, all mythologies… are merely man’s own invention.” For Lewis, Christianity was merely “one mythology among many.”

Personally, I’m inclined to agree with the young Lewis on that point (although I, too, have an affinity for myths both ancient and modern); it’s a shame he lost that rigorous clarity later on. I disagree with his early rejection of the thrill of imagination; he insisted it must be kept “strictly separate from the rational.” So what changed? That’s not entirely clear. Over a period of several years, Lewis learned to embrace his childhood love of myth and story, particularly the emotional sensation he called “Joy,” which would come to symbolize, for him, the divine, in the form of the Christian god.  Through long discussions with Tolkien and another Oxford colleague, Owen Barfield (ironically, a fellow atheist, albeit one who propounded the story-telling power of myth), he changed his tune. Tolkien in particular played a role, convincing him that the Christ story was the “true” version of the age-old “dying god” motif in mythology — familiar to anyone who has read Joseph Campbell’s compelling The Voyage of the Hero — but unlike, say, the story of Balder, Tolkein maintained that the Christ myth brought with it “a precise location in history and definite historical consequences.” It was myth become fact, yet still “retaining the character of myth,” as Carpenter tells it.

My problem is not with Lewis’ acceptance of the view that Christianity is rooted in the ancient “dying god” mythology; that should be patently obvious to lovers of story and myth. But it takes a certain special kind of arrogance to assume that, out of all the versions of this prevailing myth that have been told throughout the ages, the one of Jesus is the only “true” one. Lewis was too rigorous a logician not to realize this, and correctly concluded the point was logically unprovable. At some point, he chose to ignore his lingering misgivings and make a leap of faith. That is why they call it faith, after all. Lewis knew his Dante; he recognized that cold hard logic (personified in The Divine Comedy by the poet Virgil) could only lead him to Purgatory, not Paradise. But he hadn’t yet found his Beatrice.  He took that leap of faith anyway, which might be why he became so dogmatic about his adopted religion: he knew he was on logically shaky ground, just as his earlier atheistic foundation was shaken by his love for myth and the experience of “Joy.”

However enriching Lewis may have found his faith personally, I (and many others) would argue that his writing suffered for it. He was hardly a slouch in the writing department, but he lacked the subtlety and complexity of his friends Tolkien and Williams. His innate Christian bias seeped into everything he produced. Since he was a medievalist, this was less of a problem for his scholarly criticism, because the great works from that period in literary history are firmly rooted in the Christian tradition. But the didacticism hurt his fiction. Even Tolkien, a fellow believer, found the Narnia chronicles distasteful in their cavalier, overly-literal approach to mythology, announcing, “It simply won’t do, you know!”

Nonetheless, there are bright spots. Lewis’ science fiction trilogy (Out of the Silent Planet, Perelandra and That Hideous Strength) owes as much to the conventions of medieval literature as it does to his Christian faith. And for those able to look beyond the overtly Christian trappings of The Screwtape Letters, they may find a highly intelligent, perceptive, and mercilessly satirical exposition of human frailty. One can also see shades of Milton’s Paradise Lost in Screwtape’s insistence that Hell’s demons fight with an unfair disadvantage: since all creation is “good,” by virtue of emanating from God — a.k.a., “the Enemy” — everything “must be twisted before it is of any use to us.”

One of my favorite passages in these fictional letters from a senior demon to his nephew, a junior tempter, concerned the sin dubbed “gluttony of delicacy,” or the “All I want…” phenomenon. For instance, the target’s mother has an irritating habit of refusing anything offered to her, for a simple piece of toast and weak tea, rationalizing her finicky behavior with the reassurance that her wishes are quite modest, “smaller and less costly than what has been set before her.” In reality, it cleverly disguises “her determination to get what she wants, however troublesome it may be to others.”

That particular insight — like many of those contained in the book — is just as apt today, with our modern obsession with fad diets. More and more restaurants are tailoring menu items to meet the needs of their customers, whether they’re watching their carbs, cutting down on fat, avoiding meat and dairy, or choosing to subsist entirely on dry toast and weak tea. Starbucks’ entire rationale seems to be affording its customers the ability to order their caffeinated beverage to the most precise specifications. (In that respect, I’m as guilty as the next person. You’ll pry my grande soy chai tea latte from my cold dead fingers before you’ll get me to go back to drinking Folger’s instant coffee or that standard-issue Lipton orange pekoe tea bag. At least offer me the option of selecting a nice darjeeling or Earl Grey blend from Twinings or something. Gluttony of delicacy, indeed.)

But I digress. For all my distaste for Lewis’ Christian didacticism, I forgive all on the merits of just one book: the unjustly ignored novel, Till We Have Faces. It is a mythical retelling of Cupid and Psyche, told from the perspective of the ugly elder sister, Orual, who eventually becomes queen of Lewis’ fictional realm. Despite her role in bringing about her sister’s downfall, Orual is a good queen, and a sympathetic character. But the book ends with a shattering moment of painful self-awareness, when the dying Orual — who has long held a grudge against the gods for their treatment of her — finally has the opportunity come before those gods and read her “complaint,” a book she has been carefully composing over the course of her entire life. It is the mythology she has created of her experience, the story she tells herself, the persona she has created to present to the world. But in the presence of the eternal, she realizes that her once-great work is now “a little, shabby, crumpled” parchment, filled not with her usual elegant handwriting, but with “a vile scribble — each stroke mean and yet savage.” This is her true self, her true voice, stripped of all the delusions and lies she has been hiding behind all those years.

Lewis is unflinching in his depiction of Orual’s metaphorical “unveiling.” And therein lies the novel’s lasting power. Narnia, Schmarnia; those books are highly over-rated. For once, Lewis achieved the essence of myth without lapsing into the cheap  didacticism that characterizes so much of his overtly Christian writing. Why hasn’t someone made the film version of Till We Have Faces? The same over-arching themes are present, but explored in a richer, far less literal (and less overtly Christian) context. Perhaps it is no coincidence that the novel — which Lewis rightly considered his best work — was written in 1955, after he had met and married Joy Davidman. She was his Beatrice, bringing his faith and understanding of mythology (not to mention himself) to a new, deeper level; everything up to that point had been Purgatory, mere pretense, in comparison. Alas, the marriage was short-lived; Joy succumbed to cancer in 1960, and Lewis wrote a wrenching poem in the days before her death, declaring,

… everything you are was making

My heart into a bridge by which I might get back

From exile, and grow man. And now the bridge is breaking.

Joy’s death precipitated a crisis of faith, and while Lewis weathered it and stubbornly clung to belief, I think it is clear from his later writings that he emerged with a deeper kind of faith, something closer to the spirit of mythology than any blind adherence to, or easy acceptance of, conventional religious dogma. He never quite got all the way to true Paradise; he lost his “bridge” midway. But he got farther in his lifetime than many modern believers who might not be quite as willing to ask the hard questions, nor bring the same rigorous, unflinching logic to bear on their faith. (That spotlight is uncomfortably unforgiving, and few of us can wholly withstand the glare.)

There is much to find objectionable in the life and work of C.S. Lewis, if one doesn’t happen to share his religious (or political, or moral) beliefs. But there is also much to praise. Give the man credit for his insights into what seems to be an innate human need to tell stories that make sense of our existence and give it broader meaning. That longing goes beyond the gods of any specific religion, and this is what lifts Till We Have Faces so far above Lewis’ other work and makes it timeless. Like Orual, Lewis’ entire life was spent weaving a “story,” but in the end it was always the same one, worked, and reworked, until he finally managed to hit the truth of the matter and say what he really meant. As Orual concludes in her moment of realization, “I saw well why the gods do not speak to us openly, nor let us answer. Till that word can be dug out of us, why should they hear the babble that we think we mean? How can they meet us face to face, till we have faces?”

When not taking random walks on 3 Quarks Daily, Jennifer Ouellette waxes whimsical on various aspects of science and culture on her own blog, Cocktail Party Physics.

The Best Poems Ever

Australian poet and author Peter Nicholson writes 3QD’s Poetry and Culture column (see other columns here). There is an introduction to his work at peternicholson.com.au and at the NLA.

The best poems ever  a collection of poetry’s greatest voices edited by Edric S. Mesmer (Scholastic Inc. 2001).

Of course it can’t be anything of the kind. However, it is no more foolish than Fade To Grey, which is my imaginary name for various anthologies that come to mind. How could ‘best poems ever’ leave out ‘Sir Gawain and the Green Knight’, Goethe and Auden? Then there is the work chosen. Carl Sandburg’s ‘Buffalo Dusk’ sits next to ‘Ode On A Grecian Urn’, and Gertrude Stein’s ‘A Red Hat’ follows Shakespeare’s Sonnet 130. Where is Gwen Harwood? What happened to Hart Crane and Yeats?

However, strange as the contents may be for someone who knows the history of poetry, I can see where the editor is coming from, since this is a booklet designed for younger readers, and readers new to poetry. From that point of view Best poems ever is interesting, especially for younger teenagers at whom Best poems ever seems mainly to be aimed. This publication raises a very important question: how do you go about teaching poetry? Just as it would be wrong to introduce opera to children with Parsifal, so it would be unwise to try for slabs of Paradise Lost or The Cantos with younger readers, although there are always going to be a few who will take to the unlikeliest reading material like ducks to water. Here there is some real depth, plus some effective set pieces of the kind that appeal to younger readers, plus some banalities. All it requires is the right teacher to inculcate habits of critical reading, which can be done over time. Rush jobs don’t work in education. You have to think in year terms, not days or weeks. I can see how a good teacher could use this little edition—seventy-one pages—to get younger readers motivated. I always think longer recitation pieces work well, none of which are included here—Australian ballads, ‘The Pied Piper of Hamelin’, Alfred Noyes’ ‘The Highwayman’ or Eliot’s Old Possum’s Book of Practical Cats. Children enjoy speaking verse aloud and begin to appreciate what language can achieve in poetic form. That is the main thing—to get children enjoying language.

It is not compromising art to put ‘The Highwayman’ before children. It isn’t one of the world’s great poems, but it is certainly well made, with excellent versification, music and rhyme. There is a poem from the Best poems ever which fills the bill to a degree— ‘When Great Dogs Fight’ by Melvin B. Tolson: ‘He padded through the gate that leaned ajar, / Maneuvered toward the slashing arcs of war, / Then pounced upon the bone; and winging feet / Bore him into the refuge of the street. // A sphinx haunts every age and every zone: / When great dogs fight, the small dog gets a bone.’

This poem would work well in class. Next to it is an extract from Shelley’s ‘To A Skylark’. Already you are on much more difficult ground, but it is still a poem that could be usefully looked at in the classroom. All children relate to birds, the idea of freedom, and escape: ‘In the golden light’ning / Of the sunken sun, / O’er which clouds are bright’ning, / Thou dost float and run, / Like an unbodied joy whose race is just begun.’

There is one poem of Emily Dickinson—’My Life had stood—a Loaded Gun’, which puts you into provocative territory, as Dickinson always does. However, young readers enjoy Dickinson on a certain level, just as they take to Robert Frost immediately. ‘The Road Not Taken’ is the poem used here.

Another thing in this edition’s favour is that it includes an equal number of female and male writers. Amongst the women, apart from Stein and Dickinson, there is Lucille Clifton, Anne Bradstreet, Aphra Behn, Lorna Cervantes, Sylvia Plath, Sor Juana Inés de la Cruz, Phillis Wheatley, H.D., Emily Brontë, Gwendolyn Brooks, Barbara Guest, Christina Rossetti, Edna St. Vincent Millay, Elizabeth I, Angelina Grimké, Elizabeth Browning and Marianne Moore. There is more than a touch of the politically-correct about this selection, and some of the poems aren’t up to much, in my opinion, but at least there’s a consciousness about representation. A teacher can do a great deal with these poems. Preparation for future experience comes readily to hand, as in this poem by Elizabeth I whose opening lines read: ‘The doubt of future foes exiles my present joy, / And wit me warns to shun such snares as threaten mine annoy. / For falsehood now doth flow and subject faith doth ebb, / Which would not be if reason ruled or wisdom weaved the web.’

Millay’s ideas don’t seem very interesting, but, once again, younger readers can relate to ‘My heart is warm with friends I make, / And better friends I’ll not be knowing; / Yet there isn’t a train I wouldn’t take, / No matter where it’s going.’

Each person is going to come up with their own anthology of poems, be it for younger readers, or readers generally. Here, gathering work for use in the classroom is the thing, and that is difficult. It’s no good putting together something the size of, say, the Faber Collected Poems of Ted Hughes, which students can’t be expected to lug around with them. Best poems ever isn’t big enough, but it is portable, and small enough to read on demand.

Older teenagers can get into Owen’s ‘Dulce Et Decorum Est’—there’s plenty of material close to hand for consideration there—or Rilke’s ‘Archaic Torso of Apollo’—a pity to have missed the opportunity to put the original German next to the English translation. ‘Dover Beach’ waits with its sober melancholy. Favourites like Thomas’ ‘Do Not Go Gentle Into That Good Night’ and Elizabeth Browning’s Sonnet 43 are good choices since these are two examples of memorable speech, hard to better, which is why they are rightly famous, ‘best’, if you like.

If I was putting together an anthology for use in schools I would do it differently. For one thing, I think it helps to relate some biography and history to place poems in an historical context. Photos of authors as children, and adults, are good too, so that readers realise poets are no different to them. An editor has to have done some hard thinking for teachers, always hard-pressed for time and harassed by the extraordinary demands made on them. You have to provide some work material concerning the poems chosen, and then at least the teacher can choose to use the associated material, or take the lesson along paths they’ve predetermined.

Well, everyone’s a critic. Best poems ever isn’t that, but it makes a stab in the direction of getting together a collection of poems that could be usefully taught in the classroom. At a time when actual study of poetry seems to be diminishing in lieu of rap songs, film scripts, advertising and text messaging, and when textbooks themselves are fast disappearing from the classroom, that is praiseworthy.

Selected Minor Works: Where’s the Philosophy?!

Justin E. H. Smith

(An extensive archive of Justin Smith’s writing can be found at www.jehsmith.com)

Now that I am a tenured professor of philosophy, and thus may resign from service in my profession’s pep squad without fear of losing my salary, I’m going to come right out and say it: after all this time as a student, and then as a graduate student, and then as a professor of philosophy, I still have absolutely no idea what philosophy is, and therefore what it is I am supposed to be doing.  I do not know what the special competences are that I, qua philosopher, am expected to have. It’s clear that I am expected to say “qua” a lot, and to give off other such social cues through language, gesture, and dress.  But it is that thing that I can do because I am a philosopher that a surgeon, or an archeologist, or a thoughtful sales clerk cannot do, because they are not philosophers, that remains elusive.

Well, one might reply, there’s “critical thinking.” But this is something that, in the ideal situation, any active participant in the civic life of a free society would be able to employ in reading the newpaper, listening to the speeches of politicians, etc.  There’s formal logic, but if I agree with Heidegger on anything it is that logic, like shortpants, is for schoolboys.  In the good old days, when one learned anything at all at school, one learned the forms of argumentation, the fallacies together with their Latin names, etc.  This is all really just advanced critical thinking, and if I can see that q follows from p on a symbol-dense page, I still don’t believe that counts as knowing anything.  As Wittgenstein said, everything is left the same.  Finally, of course, there’s the stuff about God and the soul, which used to be the stock-in-trade of philosophy and which philosophy still can’t really dispense with, in spite of its general awkwardness around the topics.  There I am certainly as ignorant as every other human being is and always has been.

I do have some special competences.  For instance, I know how to read Latin, and I use this in my research.  But that doesn’t count as a distinctively philosophical competence, since I could be employing it to read the Pope’s encyclicals, and those sure as hell aren’t philosophy. Some people, unlike me, claim to have distinctively philosophical competences.  ‘Round hiring season, one hears quite a bit about these from young Ph.D.’s without jobs. When philosophy departments run ads in the professional publication for new hires, they ask for candidates with competences in “philosophy of mind,” “philosophy of science,” etc.  I’ve even seen “philosophy of sports and leisure.”  When the candidates come for their interviews, they are asked: “Can you do philosophy of mind?”  And they had better reply: “Oh yes I can. I can do philosophy of mind.”  And then the hopeful young things will go on to list all the other varieties of philosophy they can “do.” Doing these is crucial. These days, one “does” philosophy, and one does not “philosophize.”  Eager young grad students have now sprouted up throughout America who innocently speak of “rolling up their sleeves” and “doing some philosophy” as if this were a group activity facilitated by a hackey-sack or a waterbong.

Now I’ve read countless books filed under “philosophy.”  I’ve thought about what these books have to say, and I’ve written as much as I’ve been able in response.  But I don’t remember ever having “done” philosophy.  I don’t think I belong to the same world as one capable of saying that.

The question lingers, though: is a specialist in “philosophy of mind” comparable to an organic chemist or an archeologist of neolithic burial mounds with respect to some specialized body of expert knowledge?  Perhaps, but this is still not some body of expert knowledge that every philosopher, qua philosopher (there’s that “qua” again), must have, since as I have already said I am a tenured philosopher and I have only an inkling of an idea about it.  It is not that I am not interested in it.  I am about as interested in it as I am in organic chemistry, and rather less than I am in neolithic burial mounds. And, well, vita brevis

So then why not just say that having expert knowledge in philosophy of mind is a sufficient but not necessary condition for being a philosopher, and that there is a cluster of such bodies of expert knowledge, with family resemblances between them, and that is what makes up philosophy?  There are a few problems with this approach.  One is that the millions of scruffy undergraduates cannot be entirely wrong when they see a page of, e.g., Jerry Fodor’s “A Theory of Content” and think to themselves: that’s not philosophy!  The kids want Dasein, and will-to-power, and différance, and other stuff they can be sure they won’t understand.  I am not saying that curriculum decisions should be turned over to the students. That would be a disaster.  But Richard Rorty is at least right to say that what philosophy departments offer fails largely to live up to the sense that newcomers have that the discipline ought to be doing something rather more, well, important.  Another problem with the family-resemblance approach is that there simply are no traits that occur regularly throughout the various subdisciplines. We cannot be a family if it’s not even clear that we’re the same species.

Again, the only common threads seem to be sociological, rather than doctrinal.  We recognize each other by our ability to rattle off the names of philosophy professors who have become major public personalities; to note “where they’re at” now, Harvard, Oxford, etc.; perhaps to mention that we’ve heard how much they get paid.  Reading Brian Leiter’s “Gourmet Report” is particularly helpful for generating this sense of cohesion, and anyone aspiring to join the club would do well to study it.  Learn the cues.  Get remarked –to use Pierre Bourdieu’s sardonic term to describe the autoreproduction of homo academicus— by someone who’s been remarked in the Gourmet Report, and you’re well on your way to being a remarkable philosopher.

The long war between the “analytic” and “continental” philosophers, too, has more to do with the sociology of groups than with beliefs. “Continental” philosophers go to their own conferences, where they tend to pick up the same speech habits, even the same distinctive North American Continental Philosophy accent.  They tend to say “imbricate” a lot, which sounds a good deal more precious in English than it does in the French from which it is lifted, but the majority of “continental” philosophers do not speak French.  Analytic philosophers have moved over the past few decades from a demand for “rigor” to an interest in being, like Donald Davidson, “charitable.”  They have also gone more postmodern than they like to imagine, and nowadays before they claim anything, in writing or in conference, they describe to you the “story” they’re about to “tell.”

There is also professional humor, of course, as an important factor in giving philosophers a feeling of belonging to a community.  For the most part, though, it is about as funny as the slogans accompanying images of cats wearing sunglasses that one often find in secretaries’ cubicles.  It is palliative, occupational humor, like Dilbert, or like a bumpersticker on a union van that reads “Electricians Conduit Better”: a futile effort to overcome the poverty of a life that has been reduced to and identified with the career that sustains it.

But clearly there’s some common ground that is truly philosophical, isn’t there?  Brains in vats?  Moral dilemmas involving railway switching stations?  These topics do come up, but I must say I think about them as little as possible.  My own work is on the problem of sexual generation as it was understood and debated by what used to be called “natural philosophers” in the period extending from roughly 1550 to 1700.  No brains in vats, not even any trains, let alone switching stations.

Recently, most of my reading has consisted in 16th-century botanical compendia, or, as the Germans call them, Kräuterbücher. I am permitted to work on this topic, as a philosopher, because as a matter of historical fact many of the people who cared in the period about the topic that interests me today happen to be recognized, today, as “philosophers”: Descartes, Leibniz, and so on.  Thank God for them. Their shout-outs to, e.g., Antoni van Leeuwenhoek, who did not go down in history as a philosopher, permit me, as a philosopher, to read his work on the microscopical study of fleas’ legs and on the composition of semen.  And he’s fascinating. 

What used to be called “natural philosophy” and has since been parted out into the various science departments is, in general, fascinating.  It asks whether frogs emerge de novo from slime, and whether astral influx is responsible for the growth of crystals.  I know in advance the answer to both of these questions, but I can’t shake the feeling that reading these texts, and witnessing their authors struggling with these questions, is more edifying, and more important, than  seeking to solve the problems that happen to be on the current disciplinary agenda.

Of course, as Steven Shapin –that truly brilliant outside observer of philosophy’s “doings”– has said, anti-philosophy, like philosophy, is the business of the philosophers.  Periodically, after a long spell of failed system-building and bottom-heavy foundationalism, some guy comes along with a Ph.D. in philosophy and says: Philosophy! Who needs it!  Rorty is a good recent example, though certainly just the latest in a long line.  Diogenes of Sinope, in his own way, eating garbage and pleasuring himself in the agora, was out to show what a waste of time it is to theorize instead of simply to live, to live!  There are plenty of people who go much further, such as those who drop out of grad school after one semester because they got a B+ they didn’t like, and go into investment banking and spend their lives berating those who waste theirs in the Ivory Tower.  Now that is anti-philosophy. Rorty and Diogenes, on the other hand, remain susceptible to Shapin’s jab. They are insiders, and their denunciations only work because their social identities were already secured through a demonstration of concern for and interest in philosophy.

I do not know that I would like to join them.  I think I would sooner choose to masturbate at the mall than hope to take on Rorty’s establishment-gadfly role.  I think I would just like to keep writing about what interests me, without being asked, as I all too often am by the short-sighted philosophical presentists who hear of my various research concerns: Where’s the philosophy?!  For that is precisely the question I have been asking of them.

Monday Musing: The Palm Pilot and the Human Brain, Part III

Part III: How Brains Might Work, Continued…

180pxpalmpilot5000_2In Part I of this twice-extended column, I tried to explain how it is that very complex machines such as computers (like the Palm Pilot) are designed and built by using a hierarchy of concepts and vocabularies. I then used this idea to segue into how attempts to understand the workings of the brain must reverse-engineer the design which has been provided by natural selection in that case, and in Part II, began a presentation of an interesting new theory of how the brain works put forth in his book On Intelligence by the inventor of the Palm Pilot, Jeff Hawkins, who is also a respected neuroscientist. Today, I want to wrap up that presentation. While it is not completely necessary to read Part I to understand what I will be talking about today, it is necessary to read at least Part II. Please do that now.

Last time, at the end of Part II, I was speaking of what Hawkins calls invariant representation. This is what allows us, for example, to recognize a dog as a dog, whether it is a great dane or a poodle. The idea of “dogness” is invariant at some level in the brain, and it ignores the specific differences between different breeds of dog, just as it would ignore the specific differences in how the same individual dog, Rover say, is presented to our senses in different circumstances, and would recognize it as Rover. Hawkins points out that this sense of invariance in mental representation has been remarked for some time, and even Plato’s theory of forms (if stripped of its metaphysical baggage) can be seen as a description of just this sort of ability for invariant representation.

This is not just true for the sensory side of the brain. The same invariant representations are present at the higher levels of the motor side. Imagine signing your name on a piece of paper on a two inch wide space. Now imagine signing your name on a large blackboard so that your signature sprawls several feet across it. Despite the fact that completely different nerve and muscle commands are used at the lower levels to accomplish the two tasks (in the first case, only your fingers and hand are really moving while in the second case those parts are held still while your whole arm and other parts of your body move), the two signatures will look very much the same, and could be easily recognized as your signature by an expert. So your signature is represented in an abstract way somewhere higher up in your brain. Hawkins says:

Memories are stored in a form that captures the essence of relationships, not the details of the moment. When you see, feel, or hear something, the cortex takes the detailed, highly specific input and converts it to an invariant form.It is the invariant form that is stored in memory, and it is the invariant form of each new input pattern that it gets compared to. Memory storage, memory recall, and memory recognition occur at the level of invariant forms. There is no equivalent concept in computers. (On Intelligence, p. 82)

We’ll be coming back to invariant representations later, but first some other things.

PREDICTION

Jeff_hawkins_on_stageImagine, says Jeff Hawkins, opening your front door and stepping outside. Most of the time you will do this without ever thinking about it, but suppose I change some small thing about the door: the size of the doorknob, or the color of the frame, or the weight of the door, or I add a squeak to the hinges (or take away an existing squeak). Chances are you’ll notice right away. How do you do this? Suppose a computer was trying to do the same thing. It would have to have a large database of all the door’s properties, and would painstakingly compare every property it senses with the whole database, but if this is how our brains did it, then, given how much slower neurons are than computers, it would take 20 minutes instead of the two seconds that it takes your brain to notice anything amiss as you walk through the door. What is actually happening at all times at the lower level sensory portions of your brain is that predictions are being made about what is expected next. Visual areas are making predictions about what you will see, auditory areas about what you will hear, etc. What this means is that neurons in your sensory areas become active in advance of actually receiving sensory input. Keep in mind that all this occurs well below the level of consciousness. These predictions are based on past experience of opening the door, and span all your senses. The only time your conscious mind will get involved is if one or more of the predictions are wrong. Perhaps the texture of the doorknob is different, or the weight of the door. Otherwise, this is what the brain is doing all of the time. Hawkins says the primary function of the brain is to make predictions and this is the foundation of intelligence.

Even when you are asleep the brain is busy making its predictions. If a constant noise (say the loud hum of a bad compressor in your refrigerator) suddenly stops, it may well awaken you. When you hear a familiar melody, your brain is already expecting the next notes before you hear them. If one note is off, it will startle you. If you are listening to a familiar album, you are already expecting the next song as one ends. When you hear the words “Please pass the…” at a dinner table, you simultaneously predict many possible words to follow, such as “butter,” “salt,” “water,” etc. But you do not expect “sidewalk.” (This is why a certain philosopher of language rather famously managed to say “Fuck you very much” to a colleague after a talk, while the listener heard only the expected thanks.) Remember, predictions are made by combining what you have experienced before with what you are experiencing now. As Hawkins puts it:

These predictions are our thoughts, and, when combined with sensory input, they are our perceptions. I call this view of the brain the memory-prediction framework of intelligence. (Ibid, p. 104)

HOW THE CORTEX WORKS

Let us focus on vision for a moment, as this is probably the best understood of the sensory areas of the brain. Imagine the cortex as a stack of four pancakes. We will label the bottom pancake V1, the one above it V2, the one above that V4, and the top one IT. This represents the four visual regions involved in the recognition of objects. Sensory information flows into V1 (over one million axons from your retinas feed into it), but information also flows down from regions to the one below. While parts of V1 correspond to parts of your visual field in the sense that neurons in a part of V1 will fire when a vertain feature (say an edge) is present in a certain part of the retina, at the topmost level, IT, there are cells which become active when a certain object is anywhere in your visual field. For example, a cell may only fire if there is a face present anywhere in your visual field. This cell will fire whether the face is tilted, seen at an angle, light, dark, whatever. It is the invariant representation for “face”. The question, obviously, is how to get from the chaos of V1 to the stability of the representation at the IT level.

The answer, according to Hawkins, lies in feedback. There are as many or more axons going from IT to the level below it as there are going in the upward (feedforward) direction. At first people did not pay much attention to these feedback connections, but if you are going to be making predictions, then you have to have axons going down as well as up. The axons going up carry information about what you are seeing, while the axons going the other way carry information about what you expect to see. Exactly the same thing occurs in all the sensory areas, not just vision. (There are also association areas even higher up which connect one sense to another, so that, for example, if I hear my cat meowing and the sound is approaching from around the corner, then I expect to see it in the next instant.) Hawkins’s claim is that each level of the cortex forms a sort of invariant representation of the more fragmented sensory input from the level below. It is only when we get to levels available to consciousness, like IT, that we can give these invariant representations easily understood names like “face.” Nevertheless, V2 forms invariant representations of what V1 is feeding it by making predictions of what should come in next. In this way, each level of cortex develops a sort of vocabulary built upon repeated patterns from the layer below. So now we see that the problem was never how to construct invariant representations in IT, like “face,” from the three layers below it; rather, each layer forms invariant representations based on what comes into it. In the same way, association layers above IT may form invariant representations of objects based on the input of multiple senses. Notice that this also fits well with Mountcastle’s idea that all parts of the cortex basically do the same thing! (Keep in mind that this is a simplified model of vision, ignoring much complexity for the sake of expository convenience.)

In other words, every single cortical region is doing the same thing: it is learning sequences of patterns coming in from the layer below and organizing them into invariant representations that can be recalled. This is really the essence of Hawkins’s memory-prediction framework. Here’s how he puts it:

Each region of cortex has a repertoire of sequences it knows, analogous to a repertoire of songs… We have names for songs, and in a similar fashion, each cortical region has a name for each sequence it knows. This “name” is a group of cells whose collective firing represents the set of objects in the sequence… These cells remain active as long as the sequence is playing, and it is this “name” that gets passed up to the next region in the hierarchy. (Ibid., p. 129)

This is how greater and greater stability is created as we move up in the hierarchy, until we get to stages which have “names” for the common objects of our experience, and which are available to our conscious minds as things like “face.” Much of the rest of the book is spent describing the details of how the cortical layers are wired to make all this feedforward and feedback possible; read the book if you are interested.
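To make this concrete, here is a minimal toy sketch of the idea – mine, in Python, and nothing like real cortical wiring or anything from the book. Each region memorizes short sequences of whatever symbols reach it, gives each known sequence an integer “name,” passes that name up to the region above, and uses its memory to predict what should arrive next:

    # A toy of the memory-prediction idea: each region memorizes fixed-length
    # sequences of the symbols it receives, names each known sequence with an
    # integer, and passes that name up the hierarchy. (Illustrative only.)

    class Region:
        def __init__(self, seq_len=2):
            self.seq_len = seq_len   # length of the sequences this region learns
            self.names = {}          # sequence (tuple) -> integer "name"
            self.buffer = []         # symbols received since the last full sequence

        def observe(self, symbol):
            """Consume one symbol; return the sequence's name once a full
            sequence has been seen, else None."""
            self.buffer.append(symbol)
            if len(self.buffer) < self.seq_len:
                return None
            seq, self.buffer = tuple(self.buffer), []
            if seq not in self.names:        # first exposure: learn it
                self.names[seq] = len(self.names)
            return self.names[seq]           # the "name" passed upward

        def predict(self):
            """Feedback: which symbols could come next, given the partial
            sequence sitting in the buffer?"""
            k = len(self.buffer)
            return {seq[k] for seq in self.names if seq[:k] == tuple(self.buffer)}

    # Two stacked regions: a V1-like one below, a V2-like one above.
    low, high = Region(), Region()
    for s in "abababab":            # a familiar, repeating world
        name = low.observe(s)
        if name is not None:
            high.observe(name)      # the upper region sees only names

    low.observe("a")                # a partial input...
    print(low.predict())            # {'b'}: the region's expectation

The point is only the division of labor: every region runs the same simple routine on its own inputs, and stability increases up the stack because what travels upward are the names of whole sequences rather than the raw, fast-changing symbols.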

HIERARCHIES AGAIN

As I mentioned six weeks ago when I wrote Part I of this column, complexity in design (whether done by humans or by natural selection) is achieved through hierarchies, which build it up layer by layer. Hawkins takes this idea further and says that the neocortex is built as a hierarchy because the world is hierarchical, and the job of the brain, after all, is to model the world. For example, a person is usually made of a head, torso, arms, legs, etc. The head has eyes, a nose, a mouth, etc. A mouth has lips, teeth, and so on. In other words, since eyes and a nose and a mouth occur together most of the time, it makes sense to give this regularity in the world (and in the visual field) a name: “face.” And this is what the brain does.

Have a good week! My other Monday Musing columns can be seen here.

Monday, May 1, 2006

Monday Musing: What Wikipedia Showed Me About My Family, Community, and Consensus

Like a large number of people, I read and enjoy wikipedia. For many subjects on which I need to quickly get a primer, it’s good enough, at least for my purposes. I also just read it to see the ways the articles on some topics expand (such as Buffy the Vampire Slayer), but mostly to see how some issues cease to be disputed over time and congeal (the entries on Noam Chomsky are a case in point), and to witness the institutionalization of what was initially envisioned to be an open and rather boundless form (in fact there’s a page on its policies and guidelines with a link to a page on how to propose policies). For someone coming out of political science, it’s intriguing.

To understand why, just look at wikipedia’s “Official Policy” page.

Our policies keep changing, and their interpretation as well. Hence it is common on Wikipedia for policy itself to be debated on talk pages, on Wikipedia: namespace pages, on the mailing lists, on Meta Wikimedia, and on IRC chat. Everyone is welcome to participate.

While we try to respect consensus, Wikipedia is not a democracy, and its governance can be inconsistent. Hence there is disagreement between those who believe rules should be explicitly stated and those who feel that written rules are inherently inadequate to cover every possible variation of problematic or disruptive behavior.

In either case, a user who acts against the spirit of our written policies may be reprimanded, even if no rule has technically been violated. Those who edit in good faith, show civility, seek consensus, and work towards the goal of creating a great encyclopedia should find a welcoming environment.

Its own self-description points to the complicated process, the uncertainties, and the tenuousness of forming rules to make desirable outcomes something other than completely random. Outside of the realm of formal theory, how institutions create outcomes, and especially how they interact with environmental factors, cultural elements, and psychology, is, well, one of the grand sets of questions that constitute much of the social sciences. All the more complicating for wikipedia is that its fifth key rule or “pillar” is that “wikipedia doesn’t have firm rules”.

Two of these rules or guidelines have worked to create an odd effect. The first is a “neutral point of view”, by which wikipedia (which reminds us that it is not a democracy) means a point of view “that is neither sympathetic nor in opposition to its subject. Debates are described, represented, and characterized, but not engaged in.” The second is “consensus”. The policy page on “consensus” is short. It largely discusses what “consensus” is not.

“Consensus” is, of course, a tricky concept when fleshed out. To take a small aspect, people in agreement need not have the same reasons, or reasons of equal force. One person may agree that democracy is a good thing because anything else would require too much time and effort in selecting the smartest, most benevolent dictator, etc., while another may believe that democracy is a good thing because it represents a polity truly expressing a collective and autonomously formed judgment. Sometimes consensus means not just agreeing on positions, but also on reasons and the steps between the two. In wikipedia’s case, it seems to consist of reducing debate to “x said”–“y said” pairs and an enervation of issues that are points of deep disagreement.

One interesting consequence has been that the discussion pages, free of the “neutral point of view” and “consensus” requirements, have become sites of contest, often for “cites of contest”. Perhaps more interestingly, they unintentionally demonstrate what can emerge in an open discussion without the neutrality and consensus constraints.

I was struck by this possibility a few weeks ago when I was looking up Syrian Orthodox Christians, trying to unearth some information on the relationship between two separate (sub?)denominations of the church. The reason was not particularly relevant and had more to do with curiosity about different parts of my family and the doctrinal and political divides among some of them. (We span Oriental Orthodox-reformed, Oriental Orthodox, and Eastern Catholic sects and it gets confusing who believes what.)

While looking up the various entries on the Syrian Orthodox Church and the Syro-Malabar Catholic Church, I came across a link to an entry on Knanayas. Knanayas are a set of families, an ethnic (or is it sub-ethnic?) community within the various Syriac Nasrani sects in South India, and one to which I also belong.

The entry itself was interesting, at least to me.

Knanaya Christians are descendants of 72 Judeo-Christian families who migrated from Edessa (or Urfa), the first city state that embraced Christianity, to the Malabar coast in AD 345, under the leadership of a prominent merchant prince Knai Thomman (in English, Thomas of Cana). They consisted of 400 people men, women and children, from various Syriac-Jewish clans…Before the arrival of the Knanaya people, the early Nasrani people in the Malabar coast included some local converts and largely converted Jewish people who had settled in Kerala during the Babylonian exile and after…The Hebrew term Knanaya or K’nanaim, also known as Kanai or Qnana’im, (for singular Kanna’im or Q’nai) means “Jealous ones for god”. The K’nanaim people are the biblical Jews referred to as Zealots (overly jealous and with zeal), who came from the southern province of Israel. They were deeply against the Roman rule of Israel and fought against the Romans for the soverignity of the Jews. During their struggle the K’nanaim people become followers of the Jewish sect led by ‘Yeshua Nasrani’ (Jesus the Nazarene).

Some of the history I’d known; other parts, such as our allegedly being descendants of the Qnana’im, I did not. Searching through the pages on these topics, what struck me most was nothing on the entry pages, but rather a single comment on the discussion pages. It read:

I object to the Bias of this page. We Knanaya are not all Christians, only the Nasrani among us are Christians. Can you please tone down the overtly Christian propaganda on this page and focus more on us as an ethnic group. Thankyou. [sic]

With that line, images of my family’s community shifted. It also revealed something about the value of disagreement, and of not forcing consensus.

Ram, who writes for 3QD, explored multiculturalism, cultural rights, and group conflict in his dissertation. He is fairly critical of the concept and much of the surrounding politics, as I am. Specifically, he doesn’t believe that there are any compelling reasons for using public policy and public effort to preserve a culture, even a minority culture under stress. For a host of reasons, some compelling, Ram believes that minority cultures can reasonably ask for assistance in adjustment, but cannot reasonably ask the rest to preserve their way of life. One reason he offers, and one with which I agree, is that a community is often (perhaps eternally) riddled with conflicts about the identity, practices and makeup of the community itself. These conflicts often reflect distributions of power and resistance, internal majorities and minorities, and movements for reform and reactions in defense of privilege. Any move by public power to maintain a community is to take a side, often the side of the majority. (Now, the majority may be right, but it certainly isn’t the role of public power to decide.)

But the multicultural sentiment is not driven by a desire to side with established practices within a community at the expense of dissidents and minorities. Rather, it’s driven by a false idea that there’s more consensus than there is within the group. The image is furthered by the fact that official spokesmen, usually religious figures, are seen as the authoritative figures on all community issues and not merely on religious rites, and by the fact that minorities such as gays and lesbians are labeled as shaped or corrupted by the “outside”. Forced consensus in other areas, I suspect, suffers from similar problems.

When articles on wikipedia were disputed more frequently, the discussion pages were, if anything, more filled with debate. Disputes have not been displaced onto discussion pages; and if the discussion pages have become more interesting, it is only relatively so. Ever since the 1970s, when political philosophy, political theory and the social sciences developed an interest in ideal speech situations, veils of ignorance, and deliberation, there’s been a fetish made of consensus. Certainly, talking to each other is generally better than beating each other up, but the idea of driving towards agreement may be doing a disservice to politics itself. It was for that reason that I was quite pleased by the non-Christian Knanaya charging everyone else with bias.

Happy Monday and Happy May Day.

Old Bev: Global Warning

Issues 1-3 of n+1 feature a section titled “The Intellectual Situation” which “usually scrutinizes the products of our culture and the problems of everyday life.”  (A typical scrutiny, from Issue 2: “A reading is like a bedside visit. The audience extends a giant moist hand and strokes the poor reader’s hair.”) But in Issue 4, out today, the magazine’s editors, worried “that our culture and everyday life may not exist in their current form much longer,” take a break from topics like dating and McSweeney’s and devote the section to “An Interruption”: Chad Harbach’s summary of global warming. It’s a startling essay because, unlike writings on the same subject by researchers, politicians, economists, and scientists, Harbach claims absolutely no personal authority and offers little analysis of the particulars of the situation.  Instead, he’s scared, and thinks you should be too.  And you shouldn’t be scared just of the hurricanes, but of the nice days as well:

Our way of life that used to seem so durable takes on a sad, valedictory aspect, the way life does for any 19th-century protagonist on his way to a duel that began as a petty misunderstanding.  The sunrise looks like fire, the flowers bloom, the morning air dances against his cheeks.  It’s so incongruous, so unfair!  He’s healthy, he’s young, he’s alive – but he’s passing from the world.  And so are we, healthy and alive – but our world is passing from us. 

Harbach longs for the days before he knew what carbon dioxide and methane do to our climate; he doesn’t seem to resent the “way of life that used to seem so durable” as much as he does the fact that he knows it is no longer durable, and is forced to watch it progress.  It’s the coupling of access to knowledge and lack of agency that feeds Harbach’s nightmare.  And the nightmare is compelling because it doesn’t come from a journalist who has gone to the ice caps or a scientist who has gone to the ice caps or a politician who has gone to the ice caps.  It comes from a guy who has read about the ice caps on the internet.  It’s as if the 21st-century protagonist has Googled his duel and learned the outcome, but must nevertheless continue on his way, unsure when he’ll meet the opponent.

Or if he’ll meet him at all.  It takes a minimum of 40 years for some burned fuels to affect the climate, Harbach informs us.  In a sense, we’re living our grandfather’s dreams, and dreaming our granddaughter’s days. Where we, in the present, fit in is murky.  How can emergency rhetoric operate in a discussion that holds its outcomes so far in the future, and its causes so far in the past?  Harbach acknowledges that the “long lag is the feature that makes global warming so dangerous,” but his own warning is urgent and finite, and is positioned by his editors as a brief perforation with no past or future.  The essay’s marked as “An Interruption” in the regular “Intellectual Situation,” signaling both that the content is important enough to warrant the reader’s immediate attention, and that that very attention is transient. In Issue 5, the editors imply, “The Intellectual Situation” will return to its usual treatment of “problems of everyday life.”  What Harbach wants, however, is for global warming to be the everyday problem.  But what language can convey that, when the warning is always about tomorrow?

Global warming certainly isn’t a practical concern for most Americans.  It’s practical to be concerned about events like hurricanes and tornados and floods, but global warming – whether there will be more hurricanes in the next century than in this one – isn’t enough of a practical concern to make any difference in the voting booth.  Of course, gay marriage certainly isn’t a practical concern for most Americans either.  Most Americans aren’t gay, and I can’t think of a single American who would be practically threatened by a gay marriage.  But the language surrounding the issue – one of tangible emergency, one of assault on today’s family – makes the issue practical.  It suggests that the marriages of heterosexual partners are instantly destabilized and undermined at the moment when same-sex partners marry.  Political power is gained, in that case, by constructing immediate personal threat.

Harbach takes an opposite approach – he tries to construct threat by unleashing a torrent of imagined future problems so awful and so overwhelming that they seem present.  It’s a solid strategy because he executes it so well, but my lasting feeling was selfish – I’ll die before the shit hits the fan.  Environment-related language rarely confers personal threat.  Guilt, perhaps, but almost never threat.  Environmental Protection Agency?  The environment doesn’t get scared or vote.  Natural Resources Defense Council?  Natural resources don’t get mad or donate money.  Voters are selfish, and to issue a call to arms about global warming you’ve either got to convince them to care about the earth, care about their grandchildren, or get them nervous about themselves here and now. Bush exploited this last strategy in his State of the Union address when he warned that “America is addicted to oil,” implying a human weakness and illness that had to be cured, and fast.  Addiction is also a personal subject for the President; he is a born-again Christian who kicked his booze habit and can therefore kick oil, too.  He cast Americans, not the earth, as today’s victims.

Toward the end of his essay, Harbach addresses the “addiction” to oil: “This [the transition to renewable energy] is the responsibility incumbent on us, and its fulfillment could easily be couched in the familiar, voter-friendly language of American leadership, talent, and heroism.”  It’s true, it could be easily couched that way – but what seems to keep Harbach himself up at night are global warming doomsday scenarios, not American heroism. “Addicted to Oil” plays on these nightmares.  Perhaps it’s time that the NRDC and company did too.

Rx: Harvey David Preisler

The Moving Finger writes; and, having writ,
Moves on: nor all your Piety nor Wit
Shall lure it back to cancel half a line,
Nor all your Tears wash out a Word of it.

Omar Khayyam

Harvey died on May 19th, 2002, at 3:20 p.m. The cause of death was chronic lymphocytic leukemia/lymphoma. Death approached Harvey twice: once at the age of 34, when he was diagnosed with his first cancer, and then, after years of living under the shadow of a relapse, when he was finally past the fear, a second and final time four years ago. He met both with courage and grace. In these trials, he showed how a man so enthralled by life can be at peace with death. Harvey did not seek refuge in visions of heaven or a life after death. I only saw him waver once. When, in 1996, our daughter Sheherzad developed a high fever and a severe asthmatic attack at the age of two, Harvey’s anxiety was palpable. After hours of taking turns in the Emergency Room, rocking and carrying her little body connected to the nebulizer, as she finally dozed off, he asked me to step outside. In the silence of a hot, still Chicago night, he said in a tormented voice, “If something happens to her I am going to kill myself because of the very remote chance that those fundamentalists are right and there is a life after death. I don’t want the little one to be alone.”

Truth is what mattered most to Harvey. He faced it and accepted it. When I would become upset by the intensely painful nature of his illness, Harvey was always calm and matter-of-fact: “It’s the luck of the draw, Az. Don’t distress yourself over it for a second.” It was an acceptance of the human condition with quiet composure. “We are all tested. But it is never in the way we prefer, nor at the time we expect.” W. B. Yeats was puzzled by the question:

The intellect of man is forced to choose
Perfection of the life, or of the work.

Fortunately for Harvey, it was never a question of either/or. For him, work was life. Once, towards the end, when I asked him to work less and maybe do other things that he had not had the time for before, his response was that such an act would make a mockery of everything he had stood for and done until that point in his life. Work was his deepest passion outside of the family. Three days before he died, Harvey had a lab meeting at home with more than 20 people in attendance, and he went over each individual’s scientific project with his signature genuine interest and boyish enthusiasm. Even as he clearly saw his own end approach, Harvey remained hopeful that rigorous research would bring a better future to other unfortunate cancer victims.

Harvey grew up in Brooklyn and obtained his medical degree from the University of Rochester. He trained in Medicine at New York Hospital-Cornell Medical Center, and in Medical Oncology at the National Cancer Institute. At the time of his death, he was the Director of the Cancer Institute at Rush University in Chicago and the Principal Investigator of a ten-million-dollar grant from the National Cancer Institute (NCI) to study and treat acute myeloid leukemias (AML), in addition to several other large grants which funded his research laboratory, with approximately 25 scientists entirely devoted to basic and molecular research. He published extensively, including more than 350 full-length papers in peer-reviewed journals, 50 books and/or book chapters, and approximately 400 abstracts.

Harvey loved football with a passion that was only matched by mine for poetry. He was exceedingly anti-social and worked actively to avoid company while I had a considerable social circle and was almost always surrounded by friends and extended family. If you saw the two of us going out to dinner, you would have been confused; I looked dressed for a dinner at the White House while Harvey could have been taking the trash out. We met in March 1977 and did not match in age (I was 24, he was 36), status (I was single and a fresh medical graduate waiting to start my Residency, he was married with three children and the Head of the Leukemia Service), or religion (I was a Shia Muslim, he came from an Orthodox Jewish family, and his grandfather was a Rabbi). Yet, we shared a core set of values that made us better friends than we had ever been with another soul.

Harvey liked to tell a story about his first scientific experiment. He was four years old, living in Brooklyn, and went to his backyard to urinate. To his surprise, a worm emerged from the little puddle. He promptly concluded that worms came from urine. In order to prove his hypothesis, he went back the next day and repeated the experiment. To his satisfaction, another worm appeared from the puddle just as before, providing reproducible proof that worms came from urine, a belief he steadfastly hung on to until he was nine years old. An interesting corollary is the explanation for this phenomenon provided by his then six-year-old daughter Sheherzad some years ago. As he gleefully recounted his experiment, she pointed out matter-of-factly, “Of course, Daddy, if there were worms living in your favorite peeing spot, they would have to float up because of the water you were throwing on them!”

Harvey was an exceptionally gifted child whose IQ could not be measured by the standardized tests given to the Midwood High students in Brooklyn. He was experimenting with little chemistry sets and making home-made rockets at six years of age, and had read so much in biology and physics that he was excused from attending these classes throughout high school. He decided to study cancer at 15 years of age as a result of an early hypothesis he had developed concerning the etiology of cancer, and he never wavered from this goal until he died. Harvey worked with some of the best minds in his field; his mentors included Phil Leder, Paul Marks, Charlotte Friend, Sol Spiegelman and James Holland. Harvey started his career in cancer by conducting pure molecular and cellular research, for a time concentrating on leukemias in rats and mice, but decided that it was more important to study freshly obtained human tumor cells and conduct clinical research, since man must remain the measure of all things. Accordingly, he served his patients with extraordinary dedication, consideration and respect, and manifested a deep understanding of the unspeakable tragedies they and their families face once a diagnosis of cancer is given to them. Harvey exercised supreme wisdom in dealing with cancer patients as well as in trying to understand the nature of the malignant process. He not only succeeded in providing better treatment options to patients, he also devoted a lifetime to nourishing and training young and hopeful researchers, providing them with inspiration, selfless guidance and protection so they could achieve their potential in the competitive and combative academic world. As a result, he was emulated and cherished enormously as a leader, original thinker, and beloved mentor by countless young scientists and physicians. In acknowledgment of his tireless efforts to inspire and challenge young students, especially those belonging to minority communities or coming from impoverished backgrounds, Harvey was given the Martin Luther King Junior Humanitarian Award by the Science and Math Excellence Network of Chicago in 2002. Unfortunately, he was too sick to receive it in person; nonetheless, he was greatly moved by this honor.

Harvey traveled extensively to see the works of great masters first hand. He returned to Florence, Milan and Rome on an annual basis for years to see some of his favorites: the statue of Moses, Michelangelo’s Unfinished Statues, the Sistine Chapel. He would travel to Amsterdam to visit the Van Gogh Museum, and to Paris so he could show little Sheherzad his beloved Picassos. His three greatest heroes were Moses, Einstein and Freud, and his study in every home we shared (Buffalo, Cincinnati and Chicago) had beautiful framed pictures of all three. Harvey had a curious mind, and read constantly. His areas of interest ranged from Kafka and Borges to physics, astronomy, psychology, anthropology, history, evolutionary biology, complexity, fuzzy logic, chaos, paleoanthropology, the American Civil War, theology, politics, biographies, and the social sciences, to science fiction. His books numbered in the thousands. The breadth of his encyclopedic knowledge in so many areas, combined with his ability to use it in a manner appropriate to the time or the occasion, often astonished and delighted those who had serious discussions with him.

From Mark (Harvey’s son from his first marriage):

Our Dad was not a sentimental man. He was ever the scientist. Emotions clouded reason…and if you cannot see reason you may as well be blind. But Dad did have a side few were lucky enough to see. While he was always practical… he truly was an emotional man. He stood up for his beliefs and he never backed down. One of those beliefs was that it was important to die with dignity. No complaints, despite all the pain. He didn’t want to be a burden to his children or his wife. He never was. Azra said it best: taking care of him was an honor, never a burden. There’s a Marcus Aurelius quote he often spoke of: “Death stared me in the face and I stared right back.” Dad, you certainly did.

More than anything our Father was a family man. He cherished us and we cherished him. He often thanked us for all the days and nights spent by his side, but I told him there was no need for thanks. None of us could have been anywhere else. He and I often discussed his illness. He once asked me why he should keep fighting…what good was there in it? I told him his illness had brought our family much closer together. He smiled and said he was glad something good came of it.

Azra, he adored you. He often told me it was love at first sight. You two shared a love that only exists in fairy tales. Dad could be unconscious but still manage a smile when you walked into the room. I have never seen anything like it and I feel privileged to have witnessed your devotion to each other. The way you took care of him is inspiring. You never left his side and you refused to let him give up. No one could have done anything more for him and he knew it. He was very lucky to find you.

While going through his wallet I was shocked to find a piece of paper folded up in the back. On it were two quotes written in his own pen. I’d like to share one with you. “There isn’t much more to say. I have had no joy, but a little satisfaction from this long ordeal. I have often wondered why I kept going. That, at least I have learned and I know it now at the end. There could be no hope, no reward. I always recognized that bitter truth. But I am a man and a man is responsible for himself.” (The words of George Gaylord Simpson). Our Father died Sunday, May 19th at 3:20 in the afternoon. His family lives on with a love and closeness that will make him proud. Pop, we love you. You were our best friend. We will miss you everyday.

And thus Harvey lived, and thus he died. Proud to the end.

Death be not proud, though some have called thee
Mighty and dreadfull, for, thou art not so,
For, those, whom thou think'st, thou dost overthrow,
Die not, poore death, nor yet canst thou kill me.
From rest and sleepe, which but thy pictures bee,
Much pleasure, then from thee, much more must flow,
And soonest our best men with thee doe goe.

–John Donne

Monday, April 24, 2006

Talking Pints: Eurobashing, Some French Lessons

I came to the US in 1991, shortly after Francis Fukuyama penned his famous “End of History” essay. Though much contested at the time, Fukuyama’s contention that there was only one option on the menu after the end of the Cold War – capitalism über alles – seemed, from my European social democratic perspective, worryingly prescient. After all, Europe’s immediate policy response was the Maastricht Treaty. Yet a moment’s reflection should have shown me that there was nothing inevitable about this victory of capitalism. As Karl Polanyi demonstrated, the establishment of capitalism was a political act, not a result of impersonal economic forces. And just as Lenin thought historical materialism needed his helping hand, it was reasonable to suppose that Fukuyama and those following him didn’t want to leave capitalism’s final triumph in Europe to the mere logic of (Hegelian) history. Post-Cold War capitalism needed a helping hand in the form of reinforcing a new message: that while some kind of social democratic ‘Third Way’ between capitalism and socialism, the European Welfare State (EWS), was tolerable during the Cold War, now that it was over, such projects were no longer desirable, or even possible.

As a consequence, following the Japan-bashing that was so popular in the US in the 1980s, Euro-bashing came to prominence in the 1990s. A slew of research was produced by US authors claiming that in this new world of ‘globalization’, time was up for the ‘bloated welfare states’ of Europe. Unable to tax and spend without provoking capital flight, EWSs faced a choice: fundamental reform (become just like the US) or wither and die. Fundamental reform meant some combination of privatization, inflation control, a tight monetary policy, fiscal probity, more flexible labor markets and, of course, tax cuts. Some EWSs embraced these measures during the 1990s, some did not, but interestingly, none of them died. In contrast to the dire predictions of the Euro-bashers, the ‘bloated old welfare states of Europe’ continued on their way. Such claims about the ‘end of the EWS’ were made consistently, in fact almost endlessly, from 1991 until today, with apparently no ill effects.

Imagine then the sheer joy of the Euro-bashers upon finding the French (the bête noire of all things American and efficient) rioting in the streets to protect their right not to be fired, in the face of unemployment rates of almost 20 percent for those under 25. Was this not proof that the EWS had finally gone off the rails? John Tierney in the New York Times obviously thought so, arguing that when French young adults were asked what globalization meant to them, half replied, “Fear.” Likewise Washington Post columnist Robert Samuelson opined, “Europe is history’s has-been. …Unwilling to address their genuine problems, Europeans become more reflexively critical of America. This gives the impression that they’re active on the world stage, even as they’re quietly acquiescing in their own decline.” Strong claims, but how the French employment law debacle was reported in the US was as enlightening as the fact that it was given such coverage at all: it stood not as the final proof of the EWS’s impending collapse, but as evidence of the strange myths, falsehoods, and half-assed reporting about Europe that is consistently passed off as fact in US commentary.

Consider the article by Richard Bernstein in the New York Times entitled “Political Paralysis: Europe Stalls on the Road to Economic Change”. In this piece Bernstein argues that Scandinavian states have managed to cut back social protections and thereby step up growth, and that Germany under Schroeder managed to push through “a sharp reduction of unemployment benefits” that “have now made a difference.” Note the causal logic in both statements: if you cut benefits, you get growth and employment. The problem is that both statements are flatly incorrect. Scandinavian countries have in many cases increased, rather than decreased, employment protections in recent years, and as for the German labor market reforms that have “made a difference”: German unemployment is now higher than ever, and the German government could cut benefits to zero and it probably would not make much of a difference to the unemployment rate. Unfortunately, reporting things this way wouldn’t signal the impending death of Europe. It wouldn’t fit the script. In fact, an awful lot of things about European economies are mis-reported in the US. The following are my particular favorites.

  • Europe is drowning in joblessness
  • Europe has much lower growth than the US
  • European productivity is much lower than that of the US

Let’s take each of these in turn:

Unemployment: It is certainly true that some European states currently have higher unemployment than the US, Germany, France and Italy being the prime examples, and it is commonly held that this is the result of inflexible labor markets. The story is, however, a bit more complex than this. First of all, ‘European unemployment’, if you think about it, is an empty category. Seen across a twenty-year period, US unemployment is sometimes lower, sometimes higher than averaged-out European unemployment, and varies most with overall macroeconomic conditions. Consider that modern Europe contains oil-rich Norwegians, poor Italian peasants, and unemployable post-communist Poles. The UK was deregulating its financial sector at the same time as Spain and Ireland were shedding agricultural labor. As such, not only is the category of ‘Europe’ empty, but to speak of European unemployment is misleading at best.

Moreover, contrary to what Euro-bashers argue, the relationship between labor market flexibility and employment performance appears to run in exactly the opposite direction to that maintained. As David Howell notes, historically, “lower skilled workers in the United States have had…far higher unemployment rates relative to skilled workers than has been the case in…Northern European nations.” If so, one can hardly blame European unemployment on labor market rigidities since no such rigidities applied to these unemployed low-skilled Americans.

Indeed, the US’s superior unemployment performance may have less to do with the ‘flexibility’ and efficiency of its labor markets than the US itself admits. Bruce Western and Katherine Beckett argue that “criminal justice policy [in the US] provides a significant state intervention with profound effects on employment trends.” Specifically, with $91 billion spent on courts, police and prisons, in contrast to $41 billion on unemployment benefits, since the early 1990s the United States government has distorted the labor market as much as any European state.

Western and Beckett used Bureau of Justice Statistics data to recalculate US adult male employment performance by including the incarcerated in the total labor pool. Taking 1995 as a typical year, the official unemployment rate was 7.1 percent for Germany and 5.6 percent for the US. However, once recalculated to include inmates in both countries, German unemployment rises to 7.4 percent while US unemployment rises to 7.5 percent. If one adds to this equation the post-9/11 effects of a half-trillion-dollar defense budget per annum and 1.6 million people of working age under arms, then it may well be that the US’s own labor markets are hardly as free and flexible as is often imagined, and that the causes of its low unemployment lie partly in these state interventions.
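The arithmetic behind that recalculation is easy to sketch. The numbers below are illustrative round figures of mine, not Western and Beckett’s actual data (their calculation was restricted to adult men, which is why my outputs differ from the figures above), but they show how counting the incarcerated as jobless members of the labor pool moves the two countries in opposite directions:

    # Recompute an unemployment rate counting the incarcerated as jobless
    # members of the labor pool. All figures in millions, and illustrative.
    def adjusted_rate(unemployed, labor_force, incarcerated):
        return (unemployed + incarcerated) / (labor_force + incarcerated)

    us = adjusted_rate(unemployed=7.4, labor_force=132.0, incarcerated=1.6)
    de = adjusted_rate(unemployed=2.8, labor_force=39.4, incarcerated=0.05)

    print(f"US:      official {7.4 / 132.0:.1%} -> adjusted {us:.1%}")
    print(f"Germany: official {2.8 / 39.4:.1%} -> adjusted {de:.1%}")
    # The US adjustment is large and the German one negligible, because the
    # US prison population is vastly larger relative to its labor force.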

Growth: Germany and France in particular do have very real problems with unemployment, but these have very little to do with the flexibility of labor markets and a lot to do with the lack of growth. Take the case of Germany, the unemployment showcase of Euroland. From the mid-1990s until today its unemployment performance was certainly worse than the US’s, but it had also just bought, at a hopelessly inflated price, a redundant country of 17 million people (East Germany). It then integrated these folks into the West German economy, mortgaging the costs of doing so all over the rest of Europe via super-high interest rates that flattened Continental growth. Add in the further contractions caused by Germany’s adherence to the sado-monetarist EMU convergence criteria, and follow this up with the establishment of a central bank for all of Europe determined to fight an inflation that had died 15 years previously, and yes, you will have low growth, and this will impact employment. And yes, it is a self-inflicted wound. And no, it has nothing to do with labor markets and welfare states. Germany is not Europe, however, and should not be confused with it. The Scandinavian countries have all posted solid growth performances over the past several years, as have many of the new accession states.

Productivity: It is worth noting that a high employment participation rate and long working hours are seen in the US as a good thing. This is strange, however, when one considers that according to economic theory, the richer a country gets, the less it is supposed to work. This is called the labor-leisure trade-off, which the US seems determined to ignore. That Americans work many more hours than Europeans is pretty much all that explains the US’s superior productivity. As Brian Burgoon and Phineas Baxandall note, “in 1960 employed Americans worked 35 hours a year less than their counterparts in the Netherlands, but by 2000 were on the job 342 hours more.” By the year 2000, liberal regime hours [the US and the UK] were 13 percent more than in the social democratic countries [Denmark, Sweden and Norway], and 30 percent more than in the Christian Democratic countries [Germany, France, Italy]. Indeed, thirteen percent of American firms no longer give their employees any vacation time apart from statutory holidays. The conclusion? Europe trades off time against income. The US gets more plasma TVs and Europeans get to pick up their kids from school before 7pm. But the US is still more productive – right? Not quite.

Taking 1992 as a baseline year (index 100) and comparing the classic productivity measure – output per employed person in manufacturing – the US posts impressive figures, rising from an index of 100 to 185.6 in 2004. Countries that beat this include Sweden, the ‘bloated welfare state’ par excellence, with an index value of 242.6. France’s figure of 150.1 is about 20 percent less than the US’s, but considering that the average Frenchman works roughly 30 percent fewer hours than the average American, the bad news for the Euro-bashers is that France is arguably at least as productive; it simply trades off output against time.
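A quick back-of-the-envelope check makes the per-hour point explicit. It uses the column’s own index figures and treats the rough “30 percent fewer hours” as exact, so the result is indicative rather than precise:

    # Output per employed person in manufacturing, 2004 (1992 = 100),
    # adjusted for hours worked. The 0.70 hours ratio is the column's
    # rough figure, so read the result as indicative only.
    us_index, fr_index = 185.6, 150.1
    fr_hours_ratio = 0.70                 # French hours as a share of US hours

    print(fr_index / us_index)                      # ~0.81: ~19% behind per worker
    print((fr_index / fr_hours_ratio) / us_index)   # ~1.16: ahead per hour worked

On these rough numbers France actually comes out ahead per hour worked, which if anything strengthens the point.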

Equality and Efficiency: Most importantly, such comparatively decent economic performance has been achieved without the rise in inequality seen in the US. To use a summary measure of inequality, the GINI coefficient, which runs from 0 (perfect equality) to 1 (perfect inequality): the US went from a GINI of 0.301 in 1979 to a GINI of 0.372 in 1997, a roughly 24 percent increase. Among developed states, only the UK beats the US in achieving a greater growth in inequality over the same period. While the US and the UK have seen large increases in income inequality, much of Europe has not. France, for example, held inequality essentially flat, its GINI moving only from 0.293 to 0.298 between 1979 and 1994. Germany actually reduced its GINI from 0.271 to 0.264 between 1973 and 2000, as did the Netherlands, which went from 0.260 to 0.248 between 1983 and 1999. Moreover, while wealth inequality has increased enormously in the US, the rise has not been as dramatic in Europe. Where wealth inequality has increased, as in Sweden, it has done so from such a low baseline that these states are still far more egalitarian today than the US was at the end of the 1970s. Today, the concentration of wealth in the US looks like that of pre-war Europe, while contemporary Europe looks more like the post-war United States.
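For readers who want the measure itself rather than just its endpoints: the GINI coefficient is the mean absolute difference between all pairs of incomes, divided by twice the mean income. A minimal sketch on made-up incomes (mine, not the survey data behind the figures above):

    # A minimal GINI computation via the mean-absolute-difference formula.
    def gini(incomes):
        n = len(incomes)
        mean = sum(incomes) / n
        diff_sum = sum(abs(x - y) for x in incomes for y in incomes)
        return diff_sum / (2 * n * n * mean)

    print(gini([1, 1, 1, 1]))    # 0.0: perfect equality
    print(gini([0, 0, 0, 10]))   # 0.75: one person holds everything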

Given all this, why then does Europe get such a bad press? Space constraints mean I can only hazard some guesses. The intellectual laziness and lack of curiosity of the US media plays a part, as does the sheer fun of saying “we’re number one!” over and over again. What is also important is what John R. MacArthur of Harper’s Magazine noted in his response to the Tierney column discussed above: “As Tierney’s ideological predecessor (and former Republican press agent) William Safire well understood, when things get rough for your side, it’s useful to change the subject.”

Given this analysis, Euro-bashing, like Japan-bashing before it, contains two lessons. The first is that the desire to engage in such practices probably signals more about the state of the US economy than about the economy being bashed. The second is that while Europe does indeed have some serious economic problems, the usual suspects accused of causing those problems are really quite removed from the scene of the crime.

Sojourns: True Crime 2

Rape is unique among crimes because its investigation so often turns on the question of whether a crime actually happened. Was there or was there not a rape, did she or did she not consent, was she or was she not even able to consent? These sorts of questions are rarely asked about burglary or murder. And rarely do those accused of burglary or murder respond that such crimes didn’t happen (OJ didn’t say Nicole wasn’t killed, just that he didn’t do it). Most criminal investigations accordingly turn up a culprit who then defends him or herself by saying that he or she did not commit the crime. In contrast, most widely publicized rape cases involve culprits denying that a crime took place. She was not raped; we had consensual sex. Or, she was not raped; we didn’t have sex at all.  And so, despite the best intentions of state legislatures and women’s advocacy groups, the prosecution of rape cases still often turns its attention to the subjective state of the victim. She consented at the time and now has changed her mind. She has made the entire story up out of malice or revenge or insanity.

The ongoing story at Duke University exacerbates these basic features of rape law in several respects. Most obviously, it places the ordinary uncertainty of such a case in the whirlwind of publicity. As is often so in rape cases, the story is at root about whether or not a crime happened. Every detail of the incessant reporting has circulated around and over a core piece of unknown data: not whether the woman consented to have sex, but whether anyone actually had sex at all. All of the attention paid to the DNA testing in the early stages of the investigation was in the hope that this question might be answered. Human testimony is fallible. Science is not. Or so we are told by shows like CSI, with their virtuoso forensic detectives. And so we are led to believe by the well-publicized use of DNA testing in recent years to exonerate and incriminate defendants past and present. As it turns out, however, the DNA testing in this case only adds to the uncertainty. According to the District Attorney, most rape cases involve no DNA evidence at all. We thus await the evidence of her body itself, the sort of specific damage wrought by forcible sex. Her body will tell us the truth, and so get us out of the back and forth of merely verbal accusations and denials.

Even this highly pitched sense of mystery and uncertainty is ultimately the ordinary stuff of well-publicized rape cases. Were the story only about crime or no-crime, the American desire for closure and distaste for the open-ended or the unsure would eventually kill off interest. What is distinctive about the Duke story is the particularly delicate politics of race and the specific context of college athletics. About the former, little more need be said than the obvious. The story is about a twenty-seven-year-old African American mother working her way through a historically black college who has accused three white students from the nearby elite university of rape. On this accusation rest several hundred years of history. Were this not so highly charged an accusation, the defendants’ strategy would surely be more corrosive than it has thus far been. Rape law places an unusual and often unpleasant (and unfair) burden on the subjective position of the victim, on her sense of her own consent or her reliability as a witness. Thus Kobe Bryant was exonerated because his accuser was traduced in public. We have (thankfully) seen little of this so far in the Duke case, even though the accuser is an exotic dancer with a criminal record who worked for an escort service. The predictable course of events would be for the defendants to claim the accuser is deranged and unreliable and, as far as possible under state law, to bring in the shadier aspects of the woman’s employment and criminal history to do so. That this hasn’t happened, or hasn’t happened yet, is revealing about the way in which race works in public discourse.

Of course the story drew the kind of attention that it did at first not because the accuser was black but because the accused were lacrosse players at a major university. What has emerged is something like the dirty secret of athletics at an elite institution. Like Stanford or Michigan, Duke has always maintained a double image as at once an extremely selective, prestigious institution of higher learning and a powerhouse in several key sports (especially basketball). Unlike the Ivy League schools, Duke and Stanford actively recruit and provide full athletic scholarships for athletes. They also maintain a vigorous booster culture of fans and alumni. The result is a separate culture for “student” athletes, who don’t really have to take the same classes as everyone else, and who are apparently coddled in lifestyles of abuse and debauchery.

The agreed-upon facts of the case ought to be seen in this light. The lacrosse team threw a party for themselves and hired two exotic dancers from a local escort service. The dancers arrived and performed their routine surrounded by a ring of taunting and beer-drinking men. Alone and without security, they complained of their treatment and left. They were coaxed back inside. One claims to have been raped. Whatever sexual assault may or may not have taken place, the facts of the case are set against the backdrop of an aggressive Neanderthalism that is precisely the sort of thing a university should be designed to counter.

As with most accusations of rape, the legal case is certain to revolve around the question of whether a crime happened. The coverage will most likely turn to a predictable discussion of credibility combined with new revelations about the accuser and defendants’ relative truthfulness. One shouldn’t forget what this case has already revealed.

Temporary Columns: Nationalism and Democracy

I was invited by Dr. Luis Rodriguez-Pineiro to give a lecture to his class on the History of Law at the Universidad de Sevilla. Dr. Rodriguez-Pineiro works on indigenous rights and has just published a book, “Indigenous Peoples, Postcolonialism and International Law”. He asked me to speak on issues related to national identity and political democracy. I have, like many of us, struggled with these issues, intellectually and politically. Why is it hard to avoid discussions of ethnonational identity when we talk about political democracy? Why do those who advocate nationalism, particularly nationalism of the ethnic variety, tend to politically outlast, if not out-maneuver, those who advocate a more neutral form of political community when it comes to defining the state? Or, more simply, why is it so hard for us to avoid some allusion to national culture in our discussions of political community?

Democracy is a theory about how we ought to treat each other if we live in the same political community. It describes the rules through which we may engage with each other, i.e., the powers our rulers may have over us, and the rights we may have against them. These are well developed and argued in democratic theory. These powers are very familiar to most of us – the basic rights of expression, association and conscience. The right to vote and elect representatives of our choice who may form a government. Political thinkers have given these issues much thought. They have described and argued in great detail how we ought to regulate ourselves politically and what claims we may make against each other or the state. We may indeed differ about the nature of these powers – libertarians might think all that is required is to protect some basic liberties. Social democrats may argue that what we need is a state that taxes the rich and transfers money to the poor. Whatever their disagreements – which are indeed plenty – libertarians and social democrats do not disagree that what they are talking about is the political regulation of the relationship among citizens within a political community.

While they have well-developed theories and debates about the internal regulation of a political community, neither social democrats nor libertarians have anything close to a theory about the boundaries of a political community. Their theories, developed over hundreds of years, fail to tell us what the limits of a political community are. For example, if Sri Lanka and India are indeed democracies, why shouldn’t they be one country ruled from Colombo? This is where nationalism comes in.

Nationalism is a theory about the boundaries of the political community, i.e., who is in and who is out. Nationalism argues that the political community, if it is not to be simply an accident of history or an agglomeration of unconnected social groups, needs to be based on something more. That something more is the way of life of a group of people, defined by language, religion, region or culture – a way of life that precedes the political community which is based on it. Of course nationalist theories differ on what ought to form the basis of the political community. The Zionists, the Wahhabis, and the Hindutvas believe that it should be religion. The Catalans, the Tamils, and the French believe it should be language, and so on. Whatever the problems with these efforts at constructing a political community, they do have some theory about the boundaries of such a community. But nationalism has no theory about the rules and regulations that govern the interaction among members of a political community. These members could live in a dictatorship, a democracy or even a monarchy.

As social democrats who believe in combining social equality and political freedom, we have an inadequate answer to the question of whom we should share this freedom and equality with. One answer – the world – is insufficient. It is too vague and abstruse, because it allows us to get away from the actual concrete commitments – such as taxing and redistributing – that are required by such sharing. The other answer – that we should share with those who are either like us or close to us – seems both too concrete and too narrow. Should it be with those who speak like us, live near us and look like us? We are uncomfortable with this response because the instinct animating it seems to foster intolerance and inequality.

So whether we like it or not, nationalism finds a way to creep into our theories of political democracy, because of the silence of political theory about the boundaries of a political community. As a political theorist, I am troubled by this silence intellectually and may look for answers to it. As a political activist, I am sympathetic to this silence, wish to nurture it, and maybe even require it of my fellow citizens. I am wary that probing it too much may lead to the kind of answers that make it harder for me to make the case for sharing power, wealth, and space with those who happen to live together with me in the same political community as citizens, even if they do not look like me, speak my language, or pray to my gods.

monday musing: minor thoughts on cicero

Cicero may very well have been the first genuine asshole. He wasn’t always appreciated as such. During more noble and naïve times, people seem to have accepted his rather moralistic tracts like ‘On Duties’ and ‘On Old Age’ as untainted wisdom handed down through the eons. This, supposedly, was a gentle man doing his best in a corrupted age. It was easier to swallow that kind of interpretation during Medieval and early Renaissance times because many of his letters had been lost and forgotten. But Petrarch found some of them again around 1345, and the illusion of Cicero’s detached nobility became distinctly more difficult to maintain. Reading his letters, you can’t help but feel that Cicero really was a top-notch asshole. He schemed and plotted with the best of them. His hands were never anything but soiled.

Now, I think it may be clear that I come to praise Cicero, not to bury him. Even in calling him an asshole I’m handing him a kind of laurel. Because it is the particular and specific way that he was an asshole that picks him up out of history and plunks him down as a contemporary, as someone even more accessible after over two thousand years than many figures of the much more recent past. Perhaps this is a function of the way that history warps and folds. The period of the end of the Roman Republic in the last century BC speaks to us in ways that even more recent historical periods do not. Something about its mix of corruption and verve, cosmopolitanism and rank greed, self-destructiveness and high-minded idealism causes the whole period to leap over itself. And that is Cicero to a ‘T’. He is vain and impetuous, self-serving and conniving. He lies and cheats and he puffs himself up in tedious speech after tedious speech. It’s pretty remarkable. But he loved the Republic for what he thought it represented and he dedicated his life, literally, to upholding that idea in thought and in practice.

In what may be the meanest and most self-aggrandizing public address of all time, the Second Philippic Against Antony, Cicero finds himself (as usual) utterly blameless and finds Antony (as usual) guilty of almost every crime imaginable. It’s a hell of a speech, called a ‘Philippic’ because it was modeled after Demosthenes’ speeches against King Philip of Macedon, which were themselves no negligible feat of nasty rhetoric.

One can only imagine the electric atmosphere around Rome as Cicero spilled his vitriol. Caesar had only recently been murdered. Sedition and civil war were in the air. Antony was in the process of making a bold play for dictatorial power. Cicero, true to his lifelong inclinations, opposes Antony in the name of the restoration of the Republic and a free society. In his first Philippic, Cicero aims for a mild rebuke against Antony. Antony responds with a scathing attack. This unleashes the Second Philippic. “Unscrupulousness is not what prompts these shameless statements of yours,” he writes of Antony, “you make them because you entirely fail to grasp how you are contradicting yourself. In fact, you must be an imbecile. How could a sane person first take up arms to destroy his country, and then protest because someone else had armed himself to save it?”

Cicero’s condescension is wicked. “Concentrate, please—just for a little. Try to make your brain work for a moment as if you were sober.” Then he gets nasty. Of Antony’s past: “At first you were just a public prostitute, with a fixed price—quite a high one too. But very soon Curio intervened and took you off the streets, promoting you, you might say, to wifely status, and making a sound, steady, married woman of you. No boy bought for sensual purposes was ever so completely in his master’s powers as you were in Curio’s.”

Cicero finishes the speech off with a bit of high-minded verbal self-sacrifice:

Consider, I beg you, Marcus Antonius, do some time or other consider the republic: think of the family of which you are born, not of the men with whom you are living. Be reconciled to the republic. However, do you decide on your conduct. As to mine, I myself will declare what that shall be. I defended the republic as a young man, I will not abandon it now that I am old. I scorned the sword of Catiline, I will not quail before yours. No, I will rather cheerfully expose my own person, if the liberty of the city can be restored by my death.

May the indignation of the Roman people at last bring forth what it has been so long laboring with. In truth, if twenty years ago in this very temple I asserted that death could not come prematurely upon a man of consular rank, with how much more truth must I now say the same of an old man? To me, indeed, O conscript fathers, death is now even desirable, after all the honors which I have gained, and the deeds which I have done. I only pray for these two things: one, that dying I may leave the Roman people free. No greater boon than this can be granted me by the immortal gods. The other, that every one may meet with a fate suitable to his deserts and conduct toward the republic.

If the lines are a bit much, remember that Cicero was to be decapitated by Antony’s men not long afterward, and, for good measure, to have his tongue ripped out of his severed head by Antony’s wife, so that she might get final revenge on his powers of speech. It’s not every asshole that garners such tributes.

***

Around the time that he re-discovered some of Cicero’s letters, Petrarch started writing his own letters to his erstwhile hero. In the first, Petrarch writes,

Of Dionysius I forbear to speak; of your brother and nephew, too; of Dolabella even, if you like. At one moment you praise them all to the skies; at the next fall upon them with sudden maledictions. This, however, could perhaps be pardoned. I will pass by Julius Caesar, too, whose well-approved clemency was a harbour of refuge for the very men who were warring against him. Great Pompey, likewise, I refrain from mentioning. His affection for you was such that you could do with him what you would. But what insanity led you to hurl yourself upon Antony? Love of the republic, you would probably say. But the republic had fallen before this into irretrievable ruin, as you had yourself admitted. Still, it is possible that a lofty sense of duty, and love of liberty, constrained you to do as you did, hopeless though the effort was. That we can easily believe of so great a man. But why, then, were you so friendly with Augustus? What answer can you give to Brutus? If you accept Octavius, said he, we must conclude that you are not so anxious to be rid of all tyrants as to find a tyrant who will be well-disposed toward yourself. Now, unhappy man, you were to take the last false step, the last and most deplorable. You began to speak ill of the very friend whom you had so lauded, although he was not doing any ill to you, but merely refusing to prevent others who were. I grieve, dear friend, at such fickleness. These shortcomings fill me with pity and shame. Like Brutus, I feel no confidence in the arts in which you are so proficient.

Indeed, it seems that Cicero was just a fickle man looking out for Number One, and maybe he’d stumble across a little glory in the process. Still, even that isn’t entirely fair. As Petrarch admits in his disappointed letter, some concept of the Republic and human freedom was driving Cicero all along. But the Republic was always a sullied thing, even from the beginning. The concept of freedom was always mixed up with self-interest and the less-than-pure motivations of human creatures. Cicero got himself tangled up in the compromised world of political praxis precisely because he was uninterested in a concept of freedom that hovered above the actual world with practiced distaste and a permanent scowl. I like to think of him as an asshole because I like to think of him as one of us, neck-deep in a river of shit and trying his best to find a foothold, one way or another. Dum vita est, spes est (‘While there’s life, there’s hope’).

Monday, April 17, 2006

Lunar Refractions: “Our Biggest Competitor is Silence”

I really wish I knew the name of the Muzak marketer who provided this quote, which appeared in the 10 April issue of the New Yorker. Silence is one of my dearest, rarest companions, and this marketer unexpectedly emphasized its power by crediting it as the corporation’s chief competitor—no small role for such a subtle thing.

My initial, instinctual, and naturally negative reply was that, though this claim might be comforting to some, it’s also dead wrong. In most places, silence lost the battle long ago. A common strain that now unites what were once very disparate places and cultures seems to be the increasing endangerment—and in some cases extinction—of silence. I think about this a lot, especially living in a place where for much of the day loud trucks idle at length below my apartment, providing an aggravating background hum that I’ve never quite managed to relegate to the background. I lost fifteen minutes the other day fuming about the cacophonous chorus of car alarm, cement truck, and blaring car radio that overpowered whatever muffling defense my thin windows could lamely offer, not to mention the work I was trying to concentrate on. I’d buy earplugs, but noise of this caliber is also a physical, pounding presence. I admit that this sensitivity is my own to deal with, but something makes me doubt I’m alone in New York; in certain neighborhoods, and often outside hospitals, signs posted along the street read “Unnecessary Noise Prohibited.” I wonder who defines the term unnecessary, and how. Other signs warn drivers that honking the car horn in certain areas can be punished with hefty fines. A couple of years ago the same magazine cited above ran a piece—I believe it was in the Talk of the Town section—covering a local activist working to ban loud car alarms. Since silent alarms are now readily available, and have proven more effective, there really is no need for these shrill ones. My absolute favorites are those set off by the noise of a passing truck, just as one apartment-dweller might crank up the volume on the stereo to drown out a neighbor’s noise. Aural inflation runs rampant.

But the comment of the Muzak marketer wasn’t enough to get me to set fingers to keyboard; what finally did it was a day-hike I took in the hills of the upper Hudson valley on Easter Sunday. I almost thought twice about escaping the city on this holiday, since—no matter how agnostic, multicultural, or 24/7 this city might be—such days always bring a rare calm. For just a few precious hours we’re spared the sound of garbage trucks carrying our trash away from us while replacing it with a different sort of pollution, and spared many other noisy byproducts of our so-called progress. As I was walking through the woods, a wind kicked up, rustling the leaves packed down by winter snow, and I was reminded of just how loud the sound of wind through bare tree branches overhead can be. Most people would probably say that wind in trees is quieter, and less disturbing, than more urban sounds, but I was reminded yesterday that that isn’t always the case.

So I set out to briefly investigate silence—why some people can’t seem to find any, why so many do everything in their power to rid themselves of it, and why many just don’t seem to give it any thought, unobtrusive as it is. It has played a major role in many religions, from the tower of silence of Persian Zoroastrianism to the Trappist monks’ vows of silence; one could speculate, in a cursory way, that the rise of secular culture was accompanied by a rise in volume. I came across a curious coincidence while checking out the etchings of Manet recently that would support such a conclusion. While the painter of Olympia has often been called the least religious of painters, an etching of his done around 1860 (in the print collection of the New York Public Library) portrays a monk, tablet or book in hand and finger held to lips, with the word Silentium scrawled below. Given the connotative relationship between silence and omission, oblivion, and death, Manet’s etching has interesting implications for both silence and religion as they were seen in nineteenth-century Paris. If not secularization, perhaps industrialization ratcheted everything up a few decibels.

Silence—of both good and bad sorts—runs through everything, leaving traces throughout many languages. There are silent films, which exist only thanks to a former lack of technology, and were usually accompanied by live music. Some people’s ideal mate is a classic man of the strong, silent type—adjectives never jointly applied to a woman. A silentiary is (well, was, since I doubt many people go into such a line of work nowadays) a confidant, counselor, or official who maintains silence and order. Cones of silence appear in politics, radar technology, nineteen-fifties and sixties television shows, and science fiction novels. After twenty years of creating marvelous music out of what could be derogatively deemed noise, the band Einstürzende Neubauten came out with both a song and album titled “Silence is Sexy.” Early on the band’s drummer, Andrew Chudy, adopted the name N. U. Unruh—a wild play on words that can be connected to a German expressionist poet and playwright, a piece of timekeeping equipment, and, aptly, a riff on the theme of disquiet or unrest.

Getting back to my stroll in the woods, when considering the peace and quiet of a holiday I inevitably turn to poet Giacomo Leopardi’s songs in verse. His thirteenth canto (“La sera del dì di festa,” “The Evening of the Holiday”) laments the sad, weighty quietness left after a highly anticipated holiday. The falling into silence of a street song at the end is a death knell for the past festivities. In keeping with this, his twenty-fifth canto (“Il sabato del villaggio,” “Saturday Night in the Village”) praises Saturday’s energetic sounds of labor in preparation for the Sunday holiday, saving only melancholy words for the day of rest itself and its accompanying quiet. I don’t wish to summarize his rich and very specific work, so I encourage you to have a look at it for yourself. That these were written across an ocean and over a century ago attests that silence is not golden for everyone. Were he to live today, Leopardi might well be one of the iPod-equipped masses.

When I found that Leopardi’s opinion differed from my own, I looked to another trustworthy poet for a little support in favor of my own exasperation. Rainer Maria Rilke, in his famous fifth letter to the less famous young poet, written in the autumn of 1903, is evidently dependent on silence:

“… I don’t like to write letters while I am traveling, because for letter writing I need more than the most necessary tools: some silence and solitude and a not too familiar hour…. I am still living in the city… but in a few weeks I will move into a quiet, simple room, an old summerhouse, which lies lost deep in a large park, hidden from the city, from its noises and incidents. There I will live all winter and enjoy the great silence, from which I expect the gift of happy, work-filled hours….”

To break the tie set by Leopardi and Rilke, I turned to another old friend for comfort, and was surprised to find none. Seneca, in his fifty-sixth letter to Lucilius, asserts that it is the placation of one’s passions, not external silence, that gives true quiet:

“May I die if silence is as necessary as it would seem for concentration and study. Look, I am surrounded on every side by a beastly ruckus…. ‘You’re a man of steel, or you’re deaf,’ you will tell me, ‘if you don’t go crazy among so many different, dissonant noises…’. Everything outside of me might just as well be in an uproar, as long as there is no tumult within, and as long as desire and fear, greed and luxury don’t fight amongst themselves. The idea that the entire neighborhood be silent is useless if passions quake within us.”

In this letter he lists the noises that accompany him on a daily basis: the din of passing horse-drawn carriages, port sounds, industrial sounds (albeit those of the first century), neighborhood ball players, singing barbers, numerous shouting street vendors, and even people “who like to hear their own voices as they bathe.” It sounds as though he’s writing from the average non-luxury apartment of today’s cities. His point that what’s important is interior calm, not exterior quiet, exposed my foolishness.

À propos of Seneca and serenity, a friend of mine recently bought an iPod. A year ago we had a wonderful conversation where she offered up her usual, very insightful criticisms of North American culture: “What is wrong with this country? Everyone has a f****** iPod, but so few people have health insurance! Why doesn’t anyone rebel, or even seem to care?” As I walked up to meet her a couple of weeks ago I spotted from afar the trademark white wires running to each ear. “I love this thing. I mean, sure, I don’t think at all anymore, but it’s great!” To say that this brilliant woman doesn’t think anymore is crossing the line, but it’s the perfect hyperbole that nears the truth; if you can fill your ears with constant diversion, emptying the brain is indeed easier. The question, then, is what companies like Muzak and their clients can then proceed to fill our minds with if we’re subject to their sounds.

This relates to the ancient sense of otium as well—Seneca’s idea that creativity and thought need space, room, or an empty place and time in which to truly develop. Simply defining it as leisure time or idleness neglects its constructive nature. The idea that, when left at rest, the mind finds or creates inspiration for itself, and from that develops critical thought, is key to why I take issue with all this constructed, mass-marketed sound and “audio architecture.” While it might seem that an atmosphere filled with different stimuli and sounds would spark greater movement, both mental and physical, I think we’ve reached the point where that seeming activity is just that—an appearance, and one that sometimes hides a great void.

In closing, for those interested, we may finally be able to give credit to the Muzak marketer who inspired me. On Tuesday, 18 April, John Schaefer will discuss Muzak on WNYC’s Soundcheck. In the meantime, I’ll leave you with a gem from the September 1969 issue of Poppin magazine. In music critic Mike Quigley’s interview with Alice Cooper, the latter discussed what he’s looking for between himself and the audience: “If it’s total freedom, I guess the ultimate thing you can go into is total silence between the audience and performer, with the performer projecting something he doesn’t even have to play. A total silence trip is the ultimate.” Even Muzak can’t counter that.

Selected Minor Works: Of the Proper Names of Peoples, Places, Fishes, &c.

Justin E. H. Smith

When I was an undergraduate in the early 1990s, an outraged student activist of Chinese descent announced to a reporter for the campus newspaper: “Look at me! Do I look ‘Oriental’? Do you see anything ‘Oriental’ about me?  No. I’m Asian.”  The problem, however, is that he didn’t look particularly ‘Asian’ either, in the sense that there is nothing about the sound one makes in uttering that word that would have some natural correspondence to the lad’s physiognomy.  Now I’m happy to call anyone whatever they want to be called, even if personally I prefer the suggestion of sunrises and sunsets in “Orient” and “Occident” to the arbitrary extension of an ancient (and Occidental) term for Anatolia all the way to the Sea of Japan.  But let us be honest: the 1990s were a dark period in the West to the extent that many who lived then were content to displace the blame for xenophobia from the beliefs of the xenophobes to the words the xenophobes happened to use.  Even Stalin saw that to purge bourgeois-sounding terms from Soviet language would be as wasteful as blowing up the railroad system built under the Tsar.

In some cases, of course, even an arbitrary sound may take on grim connotations in the course of history, and it can be a liberating thing to cast an old name off and start afresh.  I am certainly as happy as anyone to see former Dzerzhinsky Streets changed into Avenues of Liberty or Promenades of Multiparty Elections.  The project of pereimenovanie, or re-naming, was as important a cathartic in the collapsed Soviet Union as perestroika, or rebuilding, had been a few years earlier.  If the darkest period of political correctness is behind us, though, this is in part because most of us have realized that name-changes alone will not cut it, and that a real concern for social justice and equality that leaves the old bad names intact is preferable to a cosmetic alteration of language that allows entrenched injustice to go on as before – pereimenovanie without perestroika.

But evidently the PC coffin could use a few more nails yet, for the naive theory of language that guided the demands of its vanguard continues to inform popular reasoning as to how we ought to go about calling things.  Often, it manifests itself in what might be called pereimenovanie from the outside, which turns Moslems into Muslims, Farsi into Persian, and Bombay into Mumbai, as a result of the mistaken belief on the part of the outsiders that they are thereby, somehow, getting it right.  This phenomenon, I want to say, involves not just misplaced moral sensitivity, but also a fundamental misunderstanding of how peoples and places come by their names. 

Let me pursue these and a few other examples in detail.  These days, you’ll be out on your ear at a conference of Western Sinologists if you say “Peking” instead of “Beijing.”  Yet every time I hear a Chinese person say the name of China’s capital city, to my ear it comes out sounding perfectly intermediate between these two.  Westerners have been struggling for centuries to come up with an adequate system of transliteration for Chinese, but there simply is no wholly verisimilar way to capture Chinese phonology in the Latin alphabet, an alphabet that was not devised with Chinese in mind, indeed that had no inkling of the work it would someday be asked to do all around the world.  As Atatürk showed with his Latinization of Turkish, and Stalin with his failed scheme for the Cyrillicization of the Baltic languages, alphabets are political as hell. But decrees from the US Library of Congress concerning transliteration of foreign alphabets are not of the same caliber as the forced adoption of the Latin or Cyrillic scripts.  Standardization of transliteration has more to do with practical questions of footnoting and cataloguing than with the politics of identity and recognition.

Another example.  In Arabic, the vowel between the “m” and the “s” in the word describing an adherent of Islam is a damma.  According to Al-Ani and Shammas’s Arabic Phonology and Script (Iman Publishing, 1999), the damma is “[a] high back rounded short vowel which is similar to the English ‘o’ in the words ‘to’ and ‘do’.” So then, “Moslem” or “Muslim”?  It seems Arabic itself gives us no answer to this question, and indeed the most authentic way to capture the spirit of the original would probably be to leave the vowel out altogether, since it is short and therefore, as is the convention of Arabic orthography, unwritten.

And another example.  Russians refer to Russia in two different ways: on the one hand, it is Rus’, which has the connotation of deep rootedness in history, Glagolitic tablets and the like, and is often modified by the adjective “old”; on the other hand it is Rossiia, which has the connotation of empire and expanse, engulfing the hunter-gatherers of Kamchatka along with the Slavs at the empire’s core.  Greater Russia, as Solzhenitsyn never tires of telling us, consists in Russia proper, as well as Ukraine (the home of the original “Kievan Rus’”), and that now-independent country whose capital is Minsk.  Minsk’s dominion is called in German “Weissrussland,” and in Russian “Belorussiia.”  In other words, whether it is called “Belarus” or “Belorussia” what is meant is “White Russia,” taxonomically speaking a species of the genus “Russia.”  (Wikipedia tells us that the “-rus” in “Belarus” comes from “Ruthenia,” but what this leaves out is that “Ruth-” itself is a variation on “Rus’,” which, again, is one of the names for Muscovite Russia as well as the local name for White Russia.)

During the Soviet period, Americans happily called the place “Belorussia,” yet in the past fifteen years or so, the local variant, “Belarus,” has become de rigueur for anyone who might pretend to know about the region.  Of course, it is admirable to respect local naming practices, and symbolically preferring “Belarus” over “Belorussia” may seem a good way to show one’s pleasure at the nation’s newfound independence from Soviet domination. 

However (and here, mutatis mutandis, the same point goes for Mumbai), I have heard both Americans and Belarusans say the word “Belarus,” and I daresay that when Americans pronounce it, they are not saying the same word as the natives.  Rather, they are speaking English, just as they were when they used to say “Belorussia.”  Moreover, there are plenty of perfectly innocuous cases of inaccurate naming.  No one has demanded (not yet, anyway) that we start calling Egypt “Misr,” or Greece “Hellas.”  Yet this is what we would be obligated to do if we were to consistently employ the same logic that forces us to say “Belarus.”  Indeed, even the word we use to refer to the Germans is a borrowing from a former imperial occupier –namely, the Romans– and has nothing to do with the Germans’ own description of themselves as Deutsche.

In some cases, such as the recent demand that one say “Persian” instead of “Farsi,” we see an opposing tendency: rather than saying the word in some approximation of the local form, we are expected to say it in a wholly Anglicized way.  I have seen reasoned arguments from (polyglot and Western-educated) natives for the correctness and sensitivity of “Mumbai,” “Persian,” “Belarus,” and “Muslim,” but these all have struck me as rather ad hoc, and, as I’ve said, the reasoning for “Persian” was just the reverse of the reasoning for “Mumbai.”  In any case, monolingual Persian speakers and residents of Mumbai themselves could not care less. 

Perhaps the oddest example of false sensitivity of this sort comes not in connection with any modern ethnic group, but with a species of hominid that inhabited Europe prior to the arrival of Homo sapiens and was wiped out by the newcomers about 29,000 years ago.  In the 17th century, one Joachim Neumann adopted the Hellenized form of his last name, “Neander,” and a valley near Düsseldorf where he liked to roam subsequently bore his name: the Neanderthal, or “the valley of the new man.”  A new man, of sorts, was found in that very valley two centuries later, to wit, Homo neanderthalensis.

Now, as it so happens, “Thal” is the archaic version of the German word “Tal.”  Up until the very recent spelling reforms imposed at the federal level in Germany, vestigial “h”s from earlier days were tolerated in words, such as “Neanderthal,” that had an established record of use.  If the Schreibreform had been slightly more severe, we would have been forced to start writing “Göte” instead of the more familiar “Goethe.”  But Johann Wolfgang was a property the Bundesrepublik knew it dare not touch. The “h” in “Neanderthal” was, however, axed, and the spelling reform was conducted precisely to make German writing match up with German speech: there never was a “th” sound in German, as there is in English, and so the change from “Thal” to “Tal” makes no phonetic difference.

We have many proper names in North America that retain the archaic spelling “Thal”, such as “Morgenthal” (valley of the morning), “Rosenthal” (valley of the roses), etc., and we happily pronounce the “th” in these words as we do our own English “thaw.”  Yet, somehow over the past ten years or so Americans have got it into their heads that they absolutely must say Neander-TAL, sans voiceless interdental fricative, as though this new standard of correctness had anything to do with knowledge of prehistoric European hominids, as though the Neanderthals themselves had a vested interest in the matter.  I’ve even been reproached myself, by a haughty, know-it-all twelve-year-old, no less, for refusing to drop the “th”. 

The Neanderthals, I should not have to point out, were illiterate, and the presence or absence of an “h” in the word for “valley” in a language that would not exist until several thousand years after their extinction was a matter of utter indifference to them.  Yet doesn’t the case of the Neanderthal serve as a vivid reductio ad absurdum of the naive belief that we can set things right with the Other if only we can get the name for them, in our own language, right?  The names foreigners use for any group of people (or prehuman hominids, for that matter) can only ever be a matter of indifference for that group itself, and it is nothing less than magical thinking to believe that if we just get the name right we can somehow tap into that group’s essence and refer to them not by some arbitrary string of phonemes, but as they really are in their deepest and truest essence. 

This magical thinking informs the scriptural tradition of thinking about animals, according to which the prelapsarian Adam named all the different biological kinds not with arbitrary sounds, but in keeping with their true natures.  Hence, the task of many European naturalists prior to the 18th century was to rediscover this uncorrupted knowledge of nature by recovering the lost language of Adam, and thus, oddly enough, zoology and Semitic philology constituted two different domains of the same general project of inquiry.

Some very insightful thinkers, such as Gottfried Leibniz, noticed that ancient Hebrew too, just like modern German, is riddled with corrupt verb forms and senseless exceptions to rules, and sharply inferred from this that Hebrew was no more divine than any vulgate.  Every vocabulary human beings have ever come up with to refer to the world around them has been nothing more than an arbitrary, exception-ridden, haphazard set of sounds, and in any case the way meanings are produced seems to have much more to do with syntax –the rules governing the order in which the sounds are put together– than with semantics– the correspondence between the sounds and the things in the world they are supposed to pick out. 

This hypercorrectness, then, is ultimately not just political, but metaphysical as well.  It betrays a belief in essences, and in the power of language to pick these out.  As John Dupré has compellingly argued, science educators often end up defending a supercilious sort of taxonomical correctness when they declaim that whales are not fish, in spite of the centuries of usage of the word “fish” to refer, among other things, to milk-producing fish such as whales.  The next thing you know, smart-ass 12-year-olds are lecturing their parents about the ignorance of those who think whales are fish, and another generation of blunt-minded realists begins its takeover.  Such realism betrays too much faith in the ability of authorities –whether marine biologists, or the oddly prissy postmodern language police in the English departments– to pick out essences by their true names.  It is doubtful that this faith ever did much to protect anyone’s feelings, while it is certain that it has done much to weaken our descriptive powers, and to take the joy out of language. 

Negotiations 7: Channeling Britney

(Note: Jane Renaud wrote a great piece on this subject last week. I hope the following can add to the conversation she initiated.)

When I first heard of Daniel Edwards’ Britney sculpture (Monument to Pro-Life), I was fascinated. What a rich stew: a pop star whose stock-in-trade has been to play the innocent/slut (with rather more emphasis on the latter) gets sculpted by a male artist as a pro-life icon and displayed in a Williamsburg gallery! Gimmicky, to be sure; nonetheless, the overlapping currents of Sensationalism, Irony and Politics were irresistible, so I took myself out to the Capla Kesting Fine Art Gallery on Thursday to have a look.

I am not a fan of pop culture. My attitude toward it might best be characterized as Swiss. In conversation, I tend to sniff at it. “Well,” I have been known to say, “it may be popular, but it’s not culture.” I do admit to a lingering fondness for Britney, but that has less to do with her abilities as chanteuse than it does with the fact that, as a sixteen-year-old boy, I moved from the WASPy northeast to Nashville, Tennessee and found myself studying in a seraglio of golden-haired, pig-tailed, Catholic schoolgirls, each one of them a replica of early Britney and each one of them, like her, as common and as unattainable as a species of bird. What can I say? I was sixteen. Despise the sin, not the sinner.

I was curious to know the extent to which this sculpture would be a monument to pop culture—did the artist, Daniel Edwards, fancy himself the next Jeff Koons?—and surprised to discover that, having satisfied my puerile urges (a surreptitious glance at the breasts, a disguised study of the money shot), my experience of the piece was in no way mediated by my awareness that its model was a pop star. “Britney Spears” is not present in the piece, and its precursor is not Koons’ Michael Jackson and Bubbles or Warhol’s silk-screens of Marilyn Monroe. One has to go much further back than that. Its precursor is actually Michelangelo’s Pietà.

In both cases, the spectacular back story (Mary with dead Christ on her lap, Britney with Sean’s head in her cooch) is overwhelmed by the temporal event that grounds it; so that the Pietà is nothing more (nor less) than Mother and Dead Son, and Monument to Pro-Life becomes simply Woman Giving Birth. Where Koons and Warhol empty the role of the artist as creative genius and replace it with artist as mirror to consumer society, Edwards (and Michelangelo well before him) empties the divine (the divinity of Christ, the divinity of the star) and replaces it with the human. Edwards, then, is doing something very tricky here, and if one can stomach the nausea-inducing gimmickry of the work, there’s a lot worth considering.

First of all is the composition of the work. The subject is on all fours, in a position that, as Jane Renaud wryly observed in these pages last week, might be more appropriate for getting pregnant than for giving birth. She is on a bear-skin rug; her eyes are heavily lidded, her lips slightly parted, as though she might be about to moan or to sing. And yet the sculpture is in no way pornographic or even titillating. There is nothing on her face to suggest either pain or ecstasy. The person seems to be elsewhere, even if her body is present, and the agony we associate with childbirth is elsewhere. In fact, with her fingers laid gently into the ears of the bear, not clutching or tearing at them, she seems to be channeling all her emotions into its head. Its eyes are wide open, its mouth agape and roaring. The subject is emptying herself, channeling at both ends, serenely so, a Buddha giving birth, without tension at the front end and without blood or tearing at the rear. The child’s head emerges as cleanly, and as improbably, as a perfect sphere from a perfect diamond. This is a revolution in birthing. Is that the reward for being pro-life? Which brings us to the conceptual component of Monument to Pro-Life.

To one side of the sculpture stands a display of pro-life literature. You cannot touch it; you cannot pick it up; you cannot read it even if you wanted to because it is in a case, under glass. This is not, I think, because there is not enough pro-life literature to go around, and it hints at the possibility that the artist is being deliberately disingenuous, that he is commenting both on the pro-life movement and on its monumental aspirations. The sculpture is out there in the air, naked and exposed, while the precious literature is encased and protected. Shouldn’t it be the other way around? It’s almost as if the artist is saying, “This is the pro-life movement’s relationship to women: It is self-interested and self-preserving; and in its glassed-in, easy righteousness it turns them into nothing more than vessels, emptying machines. It prefers monuments to mothers, literature to life.”

Now lest you think that I am calling Daniel Edwards the next Michelangelo, let me assure you that I most definitely am not. As conceptually compelling as I found Monument to Pro-Life to be, I also found it aesthetically repugnant. Opinions are like assholes—everybody has one—but this sculpture is hideous to look at. It’s made of fiberglass, for god’s sake, which gives it a reddish, resiny cast, as though the subject had been poached, and a texture which made me feel, just by looking at it, that I had splinters under my fingernails. I know we all live in a post-Danto age of art criticism, that ideas are everything now, and that the only criterion for judging a work of art is its success in embodying its own ideas; but as I left the gallery I couldn’t help thinking of Plato and Diogenes. When Plato defined man as a “featherless biped,” the Cynic philosopher is said to have flung a plucked chicken into the classroom, crying “Here is Plato’s man.” Well, here is Danto’s art. With a price tag of $70,000, which it will surely fetch, he can have it.

Monday Musing: The Palm Pilot and the Human Brain, Part II

Part II: How Brains Might Work

Two weeks ago I wrote the first part of this column, in which I made an attempt to explain how it is that we are able to design very complex machines like computers: we do it by employing a hierarchy of concepts, each layer of which builds upon the layer below it, ultimately allowing computers to perform seemingly miraculous tasks like beating Garry Kasparov at chess at the highest levels of the hierarchy, while all the way down at the lowest layers, the only thing going on is that some electrons are moving about on a tiny wafer of silicon according to simple physical rules. [Photo shows Kasparov in Game 2 of the match.] I also tried to explain what gives computers their programmable flexibility. (Did you know, for example, that Deep Blue, the computer which drove Kasparov to hair-pulling frustration and humiliation in chess, now takes reservations for United Airlines?)

But while there is a difference between understanding something that we ourselves have built (we know what the conceptual layers are because we designed them, one at a time, after all) and trying to understand something like the human brain, designed not by humans but by natural selection, there is also a similarity: brains also do seemingly miraculous things, like the writing of symphonies and sonnets, at the highest levels, while near the bottom we just have a bunch of neurons connected together, firing away digitally (action potentials), again according to fairly simple physical rules. (Neuron firings are digital because a neuron either fires or it doesn’t–like a 0 or a 1–there is no such thing as half of a firing or a quarter of one.) And like computers, brains are also very flexible at the highest levels: though they were not designed by natural selection specifically to do so, they can learn to do long division, drive cars, read the National Enquirer, write cookbooks, and even build and operate computers, in addition to a million other things. They can even turn “you” off, as if you were a battery-operated toy, if they feel they are not getting enough oxygen, thereby making you collapse to the ground so that gravity can help feed them more of the oxygen-rich blood that they crave (you know this well, if you have ever fainted).

To understand how brains do all this, this time we must attempt to impose a conceptual framework on them from the outside, as it were; a kind of reverse-engineering. This is what neuroscience attempts to do, and as I promised last time, today I would like to present a recent and interesting attempt to construct just such a scaffolding of theory on which we might stand while trying to peer inside the brain. This particular model of how the brain works is due to Jeff Hawkins, the inventor of the Palm Pilot and the Treo Smartphone, and a well-respected neuroscientist. It was presented by him in detail in his excellent book On Intelligence, which I highly recommend. What follows here is really just a very simplified account of the book.

Let’s jump right into it then: Hawkins calls his model the “Memory-Prediction” framework, and its core idea is summed up by him in the following four sentences:

The brain uses vast amounts of memory to create a model of the world. Everything you know and have learned is stored in this model. The brain uses this memory-based model to make continuous predictions of future events. It is the ability to make predictions about the future that is the crux of intelligence. (On Intelligence, p. 6)

Hawkins focuses mainly on the neocortex, which is the part of the brain responsible for most higher-level functions such as vision, hearing, mathematics, music, and language. The neocortex is so densely packed with neurons that no one is exactly sure how many there are, though some neuroscientists estimate the number at about thirty billion. What is astonishing is to realize that:

Those thirty billion cells are you. They contain almost all your memories, knowledge, skills, and accumulated life experience… The warmth of a summer day and the dreams we have for a better world are somehow the creation of these cells… There is nothing else, no magic, no special sauce, only neurons and a dance of information… We need to understand what these thirty billion cells do and how they do it. Fortunately, the cortex is not just an amorphous blob of cells. We can take a deeper look at its structure for ideas about how it gives rise to the human mind. (Ibid., p. 43)

The neocortex is a thin sheet consisting of six layers which envelops the rest of the brain and is folded up in a crumpled way. This is what gives the brain its walnutty appearance. (If completely unfolded, it would be quite thin–only a couple of millimeters–and would cover an area about the size of a large dinner napkin.) Now, while the neocortex looks pretty much the same everywhere with its six layers, different regions of it are functionally specialized. For example, Broca’s area handles the rules of linguistic grammar. Other areas of the neocortex have also been mapped out functionally in quite some detail by techniques such as looking at brains with localized damage (due to stroke or injury) and seeing what functions are lost in the patient. (Antonio Damasio presents many fascinating cases in his groundbreaking book Descartes’ Error.) But while everyone else was looking for differences in the various functional areas of the cortex, a very interesting observation was made by a neurophysiologist named Vernon Mountcastle (I was fortunate enough to attend a brilliant series of lectures by him on basic physiology while I was an undergraduate!) at Johns Hopkins University in 1978: he noticed that all the different regions of the neocortex look pretty much exactly the same, and have the same structure, whether they process language or handle touch. And he proposed that since they have the same structure, maybe they are all performing the same basic operation, and that maybe the neocortex uses the same computational tool to do everything. Mountcastle suggested that the only difference among the various areas is how they are connected to each other and to other parts of the nervous system. Now Hawkins says:

Scientists and engineers have for the most part been ignorant of, or have chosen to ignore, Mountcastle’s proposal. When they try to understand vision or make a computer that can “see,” they devise vocabulary and techniques specific to vision. They talk about edges, textures, and three-dimensional representations. If they want to understand spoken language, they build algorithms based on rules of grammar, syntax, and semantics. But if Mountcastle is correct, these approaches are not how the brain solves these problems, and are therefore likely to fail. If Mountcastle is correct, the algorithm of the cortex must be expressed independently of any particular function or sense. The brain uses the same process to see as to hear. The cortex does something universal that can be applied to any type of sensory or motor system. (Ibid., p. 51)

The rest of Hawkins’s project now becomes laying out in detail what this universal algorithm of the cortex is, how it functions in different functional areas, and how the brain implements it. First he tells us that the inputs to various areas of the brain are essentially similar and consist basically of spatial and temporal patterns. For example, the visual cortex receives a bundle of inputs from the optic nerve, which is connected to the retina in your eye. These inputs in raw form represent the image that is being projected onto the retina in terms of a spatial pattern of light frequencies and amplitudes, and how this image (pattern) is changing over time. Similarly the auditory nerves carry input from the ear in terms of a spatial pattern of sound frequencies and amplitudes which also varies with time, to the auditory areas of the cortex. The main point is that in the brain, input from different senses is treated the same way: as a spatio-temporal pattern. And it is upon these patterns that the cortical algorithm goes to work. This is why spoken and written language are perceived in a remarkably similar way, even though they are presented to us completely differently in simple sensory terms. (You almost hear the words “simple sensory terms” as you read them, don’t you?)
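To make this concrete, here is a toy sketch in Python (my own illustration, not code from On Intelligence; the array shapes and function names are invented for the example). The point is simply that a visual stream and an auditory stream can be represented as the same kind of object, an array of nerve-fiber activity unfolding over time, so a single algorithm can consume either:

    import numpy as np

    rng = np.random.default_rng(0)

    # Vision: activity on optic-nerve fibers, sampled over time.
    visual_stream = rng.random((100, 1000))    # (time steps, fibers)

    # Hearing: activity on auditory-nerve fibers, sampled over time.
    auditory_stream = rng.random((100, 300))   # (time steps, fibers)

    def cortical_algorithm(pattern_stream):
        """A stand-in for the hypothesized common cortical algorithm:
        it assumes only that its input is a spatio-temporal pattern,
        never asking which sense organ produced it."""
        for spatial_pattern in pattern_stream:
            pass  # learn sequences, recall, predict (discussed below)

    cortical_algorithm(visual_stream)    # the same code path serves
    cortical_algorithm(auditory_stream)  # both seeing and hearing

Nothing in the algorithm's interface mentions light or sound; that, in miniature, is Mountcastle's proposal.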

Now we get to one of Hawkins’s key ideas: unlike a computer (whether sequential or parallel), the brain does not compute solutions to problems; it retrieves them from memory: “The entire cortex is a memory system. It isn’t a computer at all.” (Ibid., p. 68) To illustrate what he means by this, Hawkins provides an example: imagine, he says, catching a ball thrown at you. If a computer were to try to do this, it would attempt to estimate its initial trajectory and speed and then use some equations to calculate its path, how long it will take to reach you, etc. This is not anything like what your brain does. So how does your brain do it?

When a ball is thrown, three things happen. First, the appropriate memory is automatically recalled by the sight of the ball. Second, the memory actually recalls a temporal sequence of muscle commands. And third, the retrieved memory is adjusted as it is recalled to accommodate the particulars of the moment, such as the ball’s actual path and the position of your body. The memory of how to catch a ball was not programmed into your brain; it was learned over years of repetitive practice, and it is stored, not calculated, in your neurons. (Ibid., p. 69)

At first blush it may seem that Hawkins is getting away with some kind of sleight of hand here. What does he mean that the memories are just retrieved and adjusted for the particulars of the situation? Wouldn’t that mean that you would need millions of memories for every single scenario like catching a ball, because every situation of ball-catching can vary from another in a million little ways? Well, no. Hawkins now introduces a way of getting around this problem, and it is called invariant representation, which we will get to soon. Cortical memories are different from computer memory in four ways, Hawkins tells us:

  1. The neocortex stores sequences of patterns.
  2. The neocortex recalls patterns auto-associatively.
  3. The neocortex stores patterns in an invariant form.
  4. The neocortex stores patterns in a hierarchy.

Let’s go through these one at a time. The first feature is why, when you are telling a story about something that happened to you, you must go in sequence (and why people often include boring details in their stories!) or you may not remember what happened, like only being able to remember a song if you sing it to yourself in sequence, one note at a time. (You couldn’t recite the notes backward–or even the alphabet backward very fast–while a computer could.) Even very low-level sensory memories work this way: the feel of velvet as you run your hand over it is just the pattern of very quick sequential nerve firings that occurs as your fingers run over the fibers. This pattern is a different sequence if you are running your hand over gravel, say, and that is how you recognize it. Computers can be made to store memories sequentially, such as a song, but they do not do this automatically, the way the cortex does.
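A toy version of this first feature might look like the following (again my own sketch, nothing from Hawkins's book; the function names are invented). Each pattern is stored under the pattern that precedes it, so recall can only run forward from a cue, one step at a time, the way you recite a song:

    def learn_sequence(memory, sequence):
        # Store each pattern keyed by its predecessor.
        for prev, nxt in zip(sequence, sequence[1:]):
            memory[prev] = nxt

    def recall_from(memory, cue, max_len=20):
        # Replay the stored sequence forward from the cue. There is no
        # cheap way to run it backward, just as you cannot easily
        # recite the alphabet in reverse.
        out = [cue]
        while out[-1] in memory and len(out) < max_len:
            out.append(memory[out[-1]])
        return out

    memory = {}
    learn_sequence(memory, list("abcdefg"))
    print(recall_from(memory, "c"))  # ['c', 'd', 'e', 'f', 'g']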

Auto-associativity is the second feature of cortical memory and what it means is that patterns are associated with themselves. This makes it possible to retrieve a whole pattern when only a part of it is presented to the system.

…imagine you see a person waiting for a bus but can only see part of her because she is standing partially behind a bush. Your brain is not confused. Your eyes only see parts of a body, but your brain fills in the rest, creating a perception of a whole person that’s so strong you may not even realize you’re only inferring. (Ibid., p. 74)

Temporal patterns are also similarly retrieved and completed. In a noisy environment we often don’t hear every single word that someone is saying to us, but our brain fills in with what it expects to have heard. (If Robin calls me on Sunday night on his terrible cell phone and says, “Did you …crackle-pop… your Monday column yet?” my brain will automatically fill in the word “write.”) Sequences of memory patterns recalled auto-associatively essentially constitute thought.
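The classic toy demonstration of auto-associative recall is a Hopfield network. Hawkins does not claim the cortex is a Hopfield net, so treat the following as my own minimal sketch of the general principle, pattern completion from a fragment:

    import numpy as np

    def train(patterns):
        # Hebbian learning: each stored pattern is associated with itself.
        n = patterns.shape[1]
        W = np.zeros((n, n))
        for p in patterns:
            W += np.outer(p, p)
        np.fill_diagonal(W, 0)
        return W / len(patterns)

    def recall(W, cue, steps=10):
        # Repeatedly update the state toward the nearest stored pattern.
        s = cue.astype(float).copy()
        for _ in range(steps):
            s = np.sign(W @ s)
            s[s == 0] = 1.0
        return s

    stored = np.array([[1, 1, 1, 1, -1, -1, -1, -1],
                       [1, -1, 1, -1, 1, -1, 1, -1]])
    W = train(stored)

    # Present only "part of the person behind the bush": the first half
    # of the first pattern, with the rest unknown (zeros).
    partial = np.array([1, 1, 1, 1, 0, 0, 0, 0])
    print(recall(W, partial))  # settles on the full first pattern

Given half of a stored pattern, the network settles on the whole, which is the machine analogue of your brain filling in the rest of the person standing behind the bush.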

Now we get to invariant representations, the third feature of cortical memory. Notice that while computer memories are designed for 100% fidelity (every bit of every byte is reproduced flawlessly), our brains do not store information this way. Instead, they abstract out important relationships in the world and store those, leaving out most of the details. Imagine talking to a friend who is sitting right in front of you. As you talk to her, the exact pattern of pixels coming over the optic nerve from your retina to your visual cortex is never the same from one moment to another. In fact, if you sat there for hours, no pattern would ever repeat because both of you are moving slightly, the light is changing, etc. Nevertheless you have a continuous sense of your friend’s face being in front of you. How does that happen? Because your brain’s internal pattern of representation of your friend’s face does not change, even though the raw sensory information coming in over the optic nerve is always changing. That’s invariant representation. And it is implemented in the brain using a hierarchy of processing. Just to give a taste of what that means, every time your friend’s face or your eyes move, a new pattern comes over the optic nerve. In the visual input area of your cortex, called V1, the pattern of activity is also different each time anything in your visual field moves, but several levels up in the hierarchy of the visual system, in your facial recognition area, there are neurons which remain active as long as your friend’s face is in your visual field, at any angle, in any light, and no matter what makeup she’s wearing. And this type of invariant representation is not limited to the visual system but is a property of every sensory and cortical system. So how is this invariant representation accomplished?

———————–

I’m sorry, but unfortunately, I have once again run out of time and space and must continue this column next time. Despite my attempts at presenting Hawkins’s theory as concisely as possible, it cannot be condensed further without losing essential parts, and there’s still quite a bit left, so I must (reluctantly) write a Part III to this column, in which I will present Hawkins’s account of how invariant representations are implemented, how memories are used to make predictions (the essence of intelligence), and how all this is implemented in hierarchical layers in the actual cortex of the brain. Look for it on May 8th. Happy Monday, and have a good week!

NOTE: Part III is here. My other Monday Musing columns can be found here.

Monday, April 10, 2006

Old Bev: POP! Culture

The cover of this week’s STAR Magazine features photos of Katie Holmes, Gwyneth Paltrow, Brooke Shields, Angelina Jolie, and Gwen Stefani (all heavily pregnant) and the yellow headline “Ready to POP!”  Each pregnancy, according to Star, is in some way catastrophic – Katie’s dreading her silent Scientology birth, Gwyneth drank a beer the other night, Brooke fears suffering a second bout of depression, Angelina’s daring to dump her partner, and Gwen’s thinking of leaving show business.  They seem infected, confused, in danger of combustion.  “I can’t believe they’re all pregnant all at the same time!” exclaimed the cashier at Walgreen’s as she rang up my purchases, as if these women were actually in the same family, or linked by something other than fame and success.  The cover of Star suggests that these ladies have literally swollen too big for their own good.

Britney Spears’ pregnancy last summer kicked off this particular craze of the celebrity glossy.  Each move she made, each potato chip she ate, each insult tossed toward Kevin – all of it was front-page pregnancy news for Star and its competitors.  “TWINS?!” screamed one cover, referencing her ballooning weight. It was coverage like this that inspired Daniel Edwards’ latest sculpture, “Monument to Pro-Life: The Birth of Sean Preston,” though from his perspective the media’s take on the pregnancy was uniformly positive.  When asked why it was Britney Spears whom he chose to depict giving birth naked and on all fours on a bear skin rug, he replied, “It had to be Britney.  She was the one.  I’d never seen such a celebrated pregnancy…and I wanted to explore why the public was so interested.”

Predictably, the sculpture has attracted a fair amount of coverage in the last few weeks, most of it in the “news of the weird” category. The owners of the Capla Kesting Fine Art Gallery have made much of the title of the piece, taking the opportunity to include in the exhibit a collection of Pro-Life materials, announcing plans for tight security at the opening,  and publicizing their goal of finding an appropriate permanent display for the work by Mother’s Day.  Edwards states that he’s undecided on the abortion issue, Britney has yet to comment on the work, and the Pro-Lifers aren’t exactly welcoming the statue into their canon.  For all of the media flap, I was expecting more of a crowd at Friday’s opening (we numbered only about 30 when the exhibit opened), and a much less compelling sculpture.

My initial reaction to photos of “Monument to Pro-Life” was that Britney’s in a position that most would sooner associate with getting pregnant than with giving birth.  Edwards, I thought, was invoking the pro-life movement as a way to protest the divorce of the sex act from reproduction. But in person, in three dimensions and life-size, the sculpture demands that the trite interpretations be dropped.  It’s a curious and exploratory work, and I urge you to go and see it if you can, rather than depend on the photos.  Unlike the pregnant women of STAR, the woman in “Monument to Pro-Life” isn’t in crisis.  She easily dominated the Capla Kesting gallery (really a garage), and made silly the hokey blue “It’s a Boy!” balloons hovering around the ceiling.  To photograph the case of pro-life materials in the corner I had to ask about five people to move – they were standing with their backs to it, staring at the sculpture.  The case’s connection to the work was flimsy, sloppy, more meaningful in print than in person.

Yes, Edwards called the piece “Monument to Pro-Life: The Birth of Sean Preston,” but I think the title aims less to signal a political allegiance than to explore the rhetoric of the abortion debate.  Birth isn’t among the usual images associated with the pro-life movement. Teeny babies, smiling children, bloody fetuses are usual, but I’ve never seen a birth depicted on the side of a van.  Pro-life propaganda is meant to emphasize the life in jeopardy – put a smiling toddler on a pro-life poster, and you’re saying to the viewer, you would kill this girl?  The bloody fetus screams, you killed this girl.  The images are meant to locate personal responsibility in the viewer.  But a birth image involves a mother, allows a displacement of that responsibility.  A birth image invokes contexts outside of the viewer’s frame of reference (but maybe she was raped! Maybe she already has four kids and no job!  Maybe she’s thirteen!), and forces the viewer to pass judgment on the mother in question.  Not all pro-lifers, not by any means, wish to punish or humiliate those women who abort their pregnancies. The preemies and toddlers and fetuses serve to inspire a protection impulse, and the more isolated those figures are from their mothers (who demand protection), the simpler the argument. Standard pro-life propaganda avoids birth images in order to isolate that protective impulse, and narrow the guilt.

Of course, the mother in this birth image has a prescribed context.  Britney Spears, according to Edwards, has made the unusual and brave choice to start a family at the height of her career, at the young age of 24.  For him, the recontextualization of “Pro-Life” seems to be not just about childbirth, but about childbirth’s relationship to ‘anti-family’ concepts of female career.  Edwards celebrates the birth of Sean Preston because of when Sean Preston was born, and to whom.  Unlike STAR, which depicts the pregnancies of successful women as dangerous grabs for more, Edwards depicts Britney’s pregnancy as a venerable retreat back to womanhood.  The image/argument would be more convincing, however, if the sculpture looked more like Britney, and if Britney were a better representative of the 24-year-old career woman. It doesn’t (the photos don’t conceal an in-person resemblance), and she isn’t (already the woman has released a greatest hits album).  Edwards would have been better served had Capla Kesting displayed a case of Britney iconography alongside the statue if he wished his audience to contemplate her decision.  But the sculpture is perfectly compelling even outside of the Britney context.

Standard pro-life rhetoric is preoccupied by transition, the magic moment of conception when ‘life begins.’  Edwards too focuses on transition, but at the other end of the pregnancy.  Sean Preston, qualified as male only by the title, is frozen just as he crowns.  He has yet to open his eyes to the world, but the viewer, unlike his mother, can see him. Many midwives and caregivers discourage childbirth in this position (hands and knees) because, though it is easy on the mother’s back and protects against perineal tearing, it makes it difficult to anticipate the baby’s arrival.  It’s a method of delivery that a mother should not attempt alone. The viewer of “Monument to Pro-Life” is necessarily implicated in the birth, assigned responsibility for the safe delivery of Sean Preston.

You’ve got to be up close to see this, though.  As I left the gallery, walked up North 5th to Roebling, a 60-something woman in a chic black coat stopped me.  “Who’s the artist?” she asked.  “Who is it that’s getting all the attention?”  I told her it was Daniel Edwards, but that the news trucks were there because it was a sculpture of Britney Spears giving birth on all fours.  Her eyebrows raised.  “You know, I thought it was very pornographic,” she offered, and I glanced back at Capla Kesting.  And from across the street, it did look like a sex show.

It’s a tricky game Daniel Edwards is playing.  On the one hand, “Monument to Pro-Life” is a fairly complicated (and exploitive) work; on the other, it’s a fairly boring (and exploitive) conduit of interest cultivated by STAR and the pro-life movement.  Unfortunately for Edwards, the media machine that inspired his work doesn’t quite convey it in full – the AP photograph of the sculpture doesn’t show her raised hips, and forget about Sean Preston crowning. However, the STAR website does have a mention of the sculpture, and a poll beneath the article for readers to express their opinions.  The questions: “Is it a smart thing for pregnant-again Britney Spears, who gave birth to son Sean Preston just 6 months ago, to have another child so soon after giving birth?” and “Can Britney make a successful comeback as a singer?”

Philip Larkin: Hull-Haven

Australian poet and author Peter Nicholson writes 3QD’s Poetry and Culture column (see other columns here). There is an introduction to his work at peternicholson.com.au and at the NLA.

Philiplarkin200x280_1For Gerard Manley Hopkins there was Heaven-haven, when a nun takes the veil, and perhaps a poet-priest seeks refuge, but for Philip Larkin there is no heaven. There is Hull, and that is where Larkin, largely free of metropolitan London’s seductions, finds his poetry and his poetics. Old chum Kingsley, it seems, can do his living for him there. But Larkin has more than two strings to his bow too, which awkward last meetings around the death bed show only too plainly.

Now that the usual attempts at deconstruction have almost run their course, the time has come to look at the work left. Pulling people off their plinth is a lifetime task for some who never get around to understanding that some writers say more, and more memorably, than they can ever do. Also, they don’t seem to understand that writers are just like everyone else, only with the inexplicable gift, which the said writer understands least of all, knowing that the gift, bestowed by the Muse, can depart in high dudgeon without notice. Larkin knew this, and lamented the silences of his later years.

Silence does seem to wait through his poems. They bleakly open to morning light, discover the world’s apparent heartlessness, then close with a dying fall. Occasionally ‘long lion days’ blaze, but the usual note is meditative, and sometimes grubby. What mum and dad do to you has to be lived out in extenso. Diary entries are too terrible to be seen and must be shredded. Bonfires and shreddings have a noble tradition in the history of literature. What would we have done if we had Byron’s memoirs and we were Murray and the fireplace waited?

Strange harmonies of contrasts are the usual thing in art. So if Larkin proclaimed racist sentiments in letters yet spent a lifetime in awe of jazz greats, or ran an effective university library whilst thinking ‘Beyond all this, the wish to be alone’ (‘Wants’), that is the doubleness we are all prone to. For artists there always seems to be the finger pointing, whereby perfection is expected of the artist but never required by the critic. Larkin is seen as squalid, not modern, provincial, by some. For others there are no problems. He says what they feel, and says it plainly.

If Larkin doesn’t have a mind like Emily Dickinson’s—who does—or scorns the Europeans, these are not, in themselves, things that limit the reach of his poetic. Larkin’s modest Collected Poems stands in distinct contrast to silverfish-squashing tomes groaning with overwriting. Larkin is a little like Roethke in that way. Every poem is precise, musical, clear. How infuriating it is that people do not follow artists’ wishes and publish poems never meant to see the light of day. There is a great virtue in Larkin’s kind of selectivity. Capitalism seems to require overproduction of product, and many poets have been happy to oblige. But this surfeit does the poet no long-term favours and usually ensures a partial, or total, oblivion. Tennyson and Wordsworth are great poets who clearly have survived oblivion, but who now reads through all of ‘Idylls of the King’ or ‘The Prelude’.

Larkin’s version of pastoral has its rubbish and cancer, sometimes its beautiful, clear light, its faltering perception of bliss, usually in others. Doubts about the whole poetic project surface occasionally, and what poet doesn’t empathise with that. How easy jazz improvisation seems in comparison to getting poems out and about. No doubt the improvisation comes only after mastery, control. Then comes the apparently spontaneous letting go. But the poet doesn’t see that. He/she is left with the rough bagging of words to get the music through. Larkin’s music is sedate, in the minor key. Wonder amongst daffodils or joy amongst skylarks are pleasures that always seem just over the hill, or flowing round a bend in the Humber as one gets to the embankment. Street lights seem like talismans of death, obelisks marking out seconds, hours, days, years, eternity. Work is a toad crushing you.

A great poet? The comparison with Hopkins is instructive. Hopkins makes us feel the beauty of nature, he makes us confront God’s apparent absence in the dark, or “terrible”, sonnets. It is committed writing in the best sense The language heaves into dense music, sometimes too dense, but you always feel engaged by his best poetry. Larkin is dubious about the whole life show. The world is seen from behind glass, whiskey to hand, or in empty churches, or from windswept plains, sediment, frost or fog lapping at footfall. Hopkins loves his poplar trees; his kingfishers catch fire; weeds shoot long and lovely and lush. Grief and joy bring the great moments of insight and expression, and thus the memorability.

The case of Larkin does raise a fundamental concern regarding art and its place in society. When the upward trudge of aesthetic idealism meets the downward avalanche of political and social reality, what is the aesthetic and political fallout. With Larkin it appears to be a stoic acceptance of status quo nihilism—waiting for the doctor, then oblivion. With Celan, one cannot get further than the Holocaust. For others, a crow is an image of violence, or tulips are weighted with lead. No longer are these images of natural beauty. No doubt, for those who have just seen a contemporary exhibition at Gagosian or been reading about the latest horrors in Darfur, Larkin could seem hopelessly out of touch, and self-pitying to boot. That is not a sensible way of looking at culture. Looking for political correctness in art always leads to disappointment.

Larkin seems to fill the expectations required by late-twentieth century English aesthetics, but I wonder. When younger, I thought Stravinsky the greatest composer of the century I was born into. Now it is Rachmaninov and Prokofiev who give me more pleasure. And I find them no less ‘great’. Robert Lowell seemed the representative poet of his generation when I was at university. Now some of the work reads to me like a bad lithium trip. Does this signify cultural sclerosis on my part? We can’t have a bar of Wagner’s anti-Semitism, but that still leaves the fact of Wagner’s greatness to be confronted. The achievement is so enormous. To use a somewhat dangerous and controversial term of the moment, it shows more than intelligent design. Appeals to the Zeitgeist, a somewhat unreliable indicator of artistic excellence, are last resorts for those who like to give their critiques an apparently incontrovertible seal of approval. In the interim, culture remains dynamic and reputations sink or swim depending on factors having very little to do with intrinsic value.

In Hull Larkin found his haven, the world held warily at bay. However, the world cannot be held at bay for long. The general public want their pound of flesh, and they will take it. Hopkins’ divided soul has passed through mercy, and mercilessness, to a Parnassian plateau. Larkin has entered upon his interregnum, where an uncertain reckoning now takes shape.

The following is the first part of a two-part poem, ‘Larkin Land’, written in 1993.

P03p37           Larkin Letters 

Perhaps this sifted life is right—
The best of him was poetry
Bearing acid vowels
In catalogued soliloquy,

Where art’s unspent revisions
Would liberate, restore;
Trapped in a bone enigma
Ideals could still creep through.

A fifty-dollar lettered life
Can’t give you all the facts.
When one has got a poem just right
Awkward prose seems second-best.

Judgment is mute
When words come from pain—
Beside fierce Glenlivet
These civilised spines

Stare past the face
Of a thousand-year spite;
Annexed by form,
Poems survive the killing night.

So, at end, the cost of verse
Is paid for with this strife;—
Though not asked for, given,
This England mirrored into life.

Written 1993

Monday Musing: Al Andalus and Hapsburg Austria

One probably apocryphal story of the Alhambra tells of how Emir Al Hamar of Gharnatah (Granada) decides to begin the undertaking. One night in the early 13th century, Al Hamar has a dream that the Muslims would be forced to leave Spain. He takes the dream to be prophetic and, more importantly, to be the will of God. But he decides that if the Muslims are to leave Spain, then they would leave a testament to their presence in Spain, to al Andalus. So Al Hamar begins the project (finished by his descendants) that would result in one of the world’s most beautiful palaces, the Alhambra. Muslim Spain was still in its Golden Age at this point, but also just two and a half centuries before the expulsion/reconquest. The peak of the Golden Age had probably passed, with its most commonly suggested moment coinciding with the life of the philosopher ibn Rushd, or Averroes (1126-1198 C.E.).

300pxgranada99towardsalhambra

Muslim Spain plays an interesting role in different contemporary political imaginations. For Muslim reformers, it is an image of a progressive, forward looking and tolerant period in Islam, where thinkers such as ibn Rushd could assert the primacy of reason over revelation. For radical Islamists, it’s a symbol of Islam at the peak of its geopolitical power. For conservatives in the West it is a chapter in an off-again, on-again clash of civilizations. For Western progressives, it is an image of a noble, pre-modern multiculturalism tolerant of Christians and Jews. That is, for the contemporary imagination, it has become the political equivalent of a Rorschach.

250pxboabdilferdinandisabella

I see no reason why I should be different in my treatment of Al Andalus (In all honesty, I react fairly badly, I cringe, when people speak of past cultures and civilizations as idyllic, free of conflict, and held together by honor, duty, and understanding. The only thing I’ve ever been nostalgic for is futurism.) Morgan’s post last Monday on Joseph Roth reminded me of Andalusian Spain, of all things.

The Hapsburg Empire is the other Rorschach for the imagination of political history. The Austro-Hungarian Empire carries far less baggage from their involvement with the present than Andalusia does, but it certainly suffered its fair share. The break up of the Soviet Empire and the unleashing of “pent up” or “frustrated” national aspiration had many looking to the Hapsburgs as a model of a noble, pre-modern multiculturalism.

My projection onto these inkblots of history is something altogether different. In the changing borders and bibliographies of Andalusian and Austrian history, I see societies that reach a cultural and intellectual peak as (or is it because?) they are overcome with panic about the end of their world. A “merry” or “gay apocalypse”, is how Hermann Broch, the author of the not so merry but apocalyptic Death of Virgil, described the period. This sentiment echoes not just in literature but even in a book as systematic as Karl Polyani’s The Great Transformation. Somehow it’s clear, Karl Kraus’ Grumbler, the pessimistic commentator who watches the world go mad and then be annihilated by the cosmos as punishment for the world war in The Last Days of Mankind, was lying in wait long before the catastrophe, that is, during the Golden Age itself.

The early 13th century was hardly a trough for the Moors in Spain, just as the period before World War I was not a cultural malaise for the Austrians, or the rest of Europe for that matter. Quite the contrary. If there is an image that these societies evoke, it is feverish activity, even if it’s not the image that, say, comes across in Robert Musil’s endless description of the society, The Man Without Qualities. Broch would write himself to death in some bizarre twist on Scheherazade.

180pxkarl_kraus_1914

The inscriptions on the Alhambra, such as “Wa la ghalib illa Allah” (“There is no conqueror but God”), are written in soft stone. They have to be replaced, and thereby they require the engagement of the civilization that is to succeed the Moors. Quite an act of faith. While it may be the case that some such as Kraus (or Stefan Zweig) expected the end of all civilization, Austrian thought and writing of the era show a similar faith despite the Anschluss. Admittedly, you have to really look for it. And it certainly did export some of the better minds of the time—including Broch, Polyani, Karl Popper, and Friedrich von Hayek, albeit for reasons of horror and that are to its shame.

It is harder to know what to make of these civilizations, for which an awareness or expectation of their end spurs many of their greatest achievements. There aren’t too many of them. They have in common the fact that they are remembered for relative tolerance, but that could just be a prerequisite to flourish in the first place. Their appeal is, however, clear—as close to an image a society can have of creating, thinking and engaging, even through despair, some way to survive the apocalypse.

Happy Monday.