Lunar Refractions: Longing for Perfect Porn Aristocrats and Other Delights

Leonard Cohen’s music first came to me in my early teens. I fell deeply in love, and thought, this will pass, this is an adolescent thing, a phase, an infatuation; time or luck will have me grow out of this.

His words came to me—as many great things come to me, in pathetic or even hideous masks, to test whether or not I am easily fooled by the disguises woven to hide their wondrous nature—in Oliver Stone’s 1994 movie Natural Born Killers. It is a rather dismissible movie, though the soundtrack is amazing (thank you, Trent Reznor et al.), and it did its job of delivering the unexpected, unforeseeable goods.

But by then I already, albeit unwittingly, knew these two introductory songs, “The Future” and “Waiting for the Miracle,” from one of my friend Vanessa’s mixed tapes. I just didn’t know who was behind the suave voice. A few years and several album acquisitions later, an acquaintance in Rome asked me what I was listening to at the moment, and it was Cohen’s 1973 album Live Songs. The response so impressed me that I bring it to you verbatim: “Leonard Cohen? Nobody listens to him anymore. We were all listening to him in the late seventies, when we were young and radical and left.” Yeah, I left. I’m fine being told that my tastes are quite yesterday, and I knew this guy probably didn’t get it because he was, well, who he was. He was also definitely one of the numerous Europeans who helped make Cohen more popular over there than in North America by not understanding his lyrics.

Well, to echo the rampant name-calling that follows him everywhere, the Ladies’ Man, the Grocer of Despair, grandson of the Prince of Grammarians, has just published a new old book, titled Book of Longing, and was on the radio three weeks ago chatting with Terry Gross. She did a fairly good job, considering that the usual sort of questions, many of which she tried, really didn’t fit here, and Cohen seems to be no comforter.

Lie to me, Leonard

Firstly, he lied his way through the entire hour. Okay, perhaps they weren’t all lies, and the ones that were lies were committed with some definite * intentions (*I’m at a loss for the appropriate adjective: honest? Low? Lofty? Sick? Sweet? Romantic? All of the above?). The truth is that he can’t help how charming he is, and frankly it’s a miracle he’s done what he has to melt deeply frozen hearts. He says he had tea on April 21 with his Zen master in celebration of the latter’s ninety-ninth birthday, but immediately backtracks to point out that it wasn’t tea—it was liquor. In his poem “Titles” he reads that “I hated everyone / but I acted generously / and no one found me out.” He valiantly assures the listener that this is true, and equally valiantly contradicts it in song and in print. Plus, I can’t help but suspect that many people have found him out. Is it possible to feign this man’s passion? Probably, but I just don’t want to think so.

Alright, that’s not so many lies. But a lot of interesting things came up. When discussing the idea of composing a poem versus composing a song, Gross asks him about the early-seventies song “Famous Blue Raincoat” and which of those two it originally was, to which he replies, “It’s all the same to me.” [Aside: forgive me for sticking to the script here and bringing up the blockbuster songs, when I’d rather fawn over the lesser-known songs like “Teachers,” “Passing Through,” “Who by Fire,” “If It Be Your Will,” “Here It Is,” etc.]

A lot of what one might call romantic creation is touted here. Ignoring the famous traits of “despair, romantic longing, and cynicism” alongside the idea that “at the same time, there’s a spiritual quality to many of his songs” mentioned as an introductory nothingness on the radio show, when asked where “Famous Blue Raincoat” came from, he replies, “I don’t know, I don’t remember how it arose—I don’t remember how any of them get written.” When asked why he left the Zen center after five or six years of work there, he replies, “I don’t know… I’m never sure why I do anything, to tell you the truth.” About the creation of “Everybody Knows,” “I don’t really remember…. You see, if I really remembered the conditions which produce good songs, I’d try to establish them,” going on to mention the use of napkins, notebooks, etc.

Then there’s the sheer hard labor of it:

You get it but you get it after sweating…. I can’t discard anything unless I finish it, so I have to finish the verses that I discard. So it takes a long time; I have to finish it to know whether it deserves to survive in the song, so in that sense all the songs take a long time. And although the good lines come unbidden, they’re anticipated, and the anticipation involves a patient application to the enterprise.

Of the early-nineties song “Always,” Gross points out that he’s taken a song by Irving Berlin and added a few lines, making it “suddenly very dark and sour.” His quick reply: “Well, you can depend on me for that…”. His is “a kind of drunken version of it.” He’d like to do a song in the vein of those great American songbook lyricists he doesn’t feel equal to:

I have a very limited kind of expression, but I’ve done the best that I can with it, and I’ve worked it as diligently as I can, but I don’t really—except for songs like “Hallelujah,” or “If It Be Your Will,” I think those are two of my best songs—I don’t live up to… those great songwriters….

There’s a lot of things I’d like to do, but when you’re actually in the trenches, and, you know, you’re in front of the page or… the guitar or the keyboard under your hands, you know you have to deal with where the energy is, what arises, what presents itself with a certain kind of urgency. So, in those final moments, you really don’t choose, you just go where the smoke is, and the flames and the glow or the fire, you just go there.


The Ponies Run, the Girls are Young

But enough about composition. My favorite bits are where Cohen plays the role of the [not exactly dirty] old man. Page 56 of his new book carries a poem for a certain Sandy, and what girl doesn’t occasionally want to be the Sandy sung to here? “I know you had to lie to me / I know you had to cheat / To pose all hot and high behind / The veils of sheer deceit / Our perfect porn aristocrat / So elegant and cheap / I’m old but I’m still into that / A thousand kisses deep.” Age is very present here, and while he’s sung of so many other mortal weaknesses over the past forty-plus years, it seems he had to wait for this particular one to sink into the bones before it began to permeate his work. In four short lines on page 171 you learn the sorrows of the elderly. Go to page 14 to read my favorite tidbit written to a young nun, speaking of staggered births, time disposing of two people whose generations separate them, and whose turn it is to die for love, whose to resurrect. This one is too beautiful to steal from page to pixel.

Betrayal also comes up. In the end the letter writer who sings about that famous blue raincoat has his woman stolen by the letter recipient. In speaking about such games, his age now seems to save him:

Fortunately I’ve been expelled from that particular dangerous garden, you know, by my age… so I’m not participating in these maneuvers with the frequency that I once did. But I think that when one is in that world, even if the situation does not result in any catastrophic splits as it does in “Famous Blue Raincoat,” one is always, you know, edging, one is always protecting one’s lover, one is always on the edge of a jealous disposition.

Later he specifies that one does not become exempt from that garden, but is just not as welcome. So what are the trade-offs for no longer being welcome? If nothing else, there’s a special voice, which in Cohen’s case is undeniably alluring. He apparently acquired it through, “well, about 500 tons of whiskey and a million cigarettes—fifty, sixty years of smoking…”. I didn’t know tar could be turned to gold.


The Fall

Then comes the most terrifying subject of all, beauty—physical beauty, superficial beauty. We are either enslaved by it, embody it, or attach ourselves to someone who does. He is still oppressed by the figures of beauty, just as he was thirty-two years ago. And here he’s at his most graceful:

I still stagger and fall…. Of course it just happens to me all the time, you just have to get very careful about it, because it’s inappropriate for an elderly chap to register authentically his feelings, you know, because they really could be interpreted, so you really have to get quite covert as you get older… or you have to find some avuncular way of responding, but still, you just, really are just, you’re wounded, you stagger, and you fall.

One feels deeply in love, and thinks this will pass, this is a phase, an infatuation; time or luck will have me grow out of this.

A Monday Musing by Morgan Meis about Cohen is here, and previous Lunar Refractions can be seen here.



Negotiations 8: On Watching the Iranian Soccer Team Crumble Before Their Mexican Counterparts on German Soil

What is the legacy of two thousand years of Christianity? What are the specific qualities that the Christian tradition has instilled and cultivated in the minds of men? They appear to be twofold, and dangerously allied: on the one hand, a more refined sense of truth than any other human civilization has known, an almost uncontrollable drive for absolute spiritual and intellectual certainties. We are speaking of a theology that through St. Thomas Aquinas assimilated into its grand system the genius of Aristotle and whose Inquisitors in the Church bequeathed to modern science its arsenal of weapons for the interrogation of truth. The will to truth in the Christian tradition is overwhelming. On the other hand, we have also inherited the ever-present suspicion that life on this earth is not in itself a supreme value, but is in need of a higher, a transcendental redemption and justification. We feel that there is something wrong with us, or that the world itself needs salvation. Alas, this unholy alliance is bound finally to corrode the very beliefs on which it rests. For the Christian mind, exercised and guided in its search for knowledge by one of the most sophisticated and comprehensive theologies the world has ever seen, has, at the same time, been fashioned and directed by the indelible Christian distrust of the ways of the world. Such a mind will eventually, in a frenzy of intellectual honesty, unmask as humbug what it began by regarding as its highest values. The boundless faith in truth, a joint legacy of Christ and Greek, will in the end dislodge every possible belief in the truth of any faith. For the Christian, belief in God becomes—unbelievable. Ergo Nietzsche:

Have you not heard of that madman who lit a lantern in broad daylight, ran to the marketplace and cried incessantly: “I seek God! I seek God!” As many of those who did not believe in God were standing around just then, he provoked much laughter. Has he got lost? asked one. Did he lose his way like a child? asked another. Or is he hiding? Is he afraid of us? Has he gone on a voyage? Emigrated? Thus they yelled and laughed.

The madman jumped into their midst and pierced them with his eyes. “Whither is God? I will tell you. We have killed him—you and I. All of us are his murderers… God is dead. God remains dead. And we have killed him.”

His listeners fell silent and stared at him in astonishment. At last he threw his lantern on the ground, and it broke into pieces and went out. “I have come too early,” he said then; “my time is not yet. This tremendous event is still on its way, still wandering; it has not yet reached the ears of men… This deed is still more distant from men than the most distant stars—and yet they have done it themselves.”

God, as Nietzsche puts it, is dead; and you and I, with the relentless little knives of our own intellect—psychology, history, and science—we have killed him. God is dead. Note well the paradox contained in those words. Nietzsche never says that there was no God, but that the Eternal has been vanquished by time, the Immortal has suffered death at the hands of mortals. God is dead. It is a cry mingled of despair and triumph, reducing, by comparison, the whole story of atheism and agnosticism before and after to the level of respectable mediocrity and making it sound like a collection of announcements by bankers who regret that they are unable to invest in an unsafe proposition.

Nietzsche brings to its perverse conclusion a line of religious thought and experience linked to the names of St. Paul, St. Augustine, Pascal, Kierkegaard, and Dostoevsky, minds for whom God was not simply the creator of an order of nature within which man has his clearly defined place, but to whom He came in order to challenge their natural being, making demands which appeared absurd in the light of natural reason.

Nietzsche is the madman, breaking with his sinister news into the marketplace complacency of the pharisees of unbelief. We moderns have done away with God, and yet the report of our deed has not reached us. We know not what we have done, but He who could forgive us is no more. No wonder Nietzsche considers the death of God the greatest event in modern history and the cause of extreme danger. “The story I have to tell is the history of the next two centuries,” he writes. “Where we live, soon nobody will be able to exist.” Men will become enemies, and each his own enemy. From now on, with their sense of faith raging within, frustrated and impotent, men will hate, however many comforts they lavish upon themselves; and they will hate themselves with a new hatred, unconsciously at work in the depths of their souls. True, there will be ever-better reformers of society, ever-better socialists and artists, ever-better hospitals, an ever-increasing intolerance of pain and poverty and suffering and death, and an ever more fanatical craving for the greatest happiness of the greatest numbers. Yet the deepest impulse informing their striving will not be love, and it will not be compassion. Its true source will be the panic-stricken determination not to have to ask the questions that arise in the shadow of God’s death: “What now is the meaning of life? Is there nothing more to our existence than mere passage?” For these are the questions that remind us most painfully that we have done away with the only answers we had.

The time, Nietzsche predicts, is fast approaching when secular crusaders, tools of man’s collective suicide, will devastate the world with their rival claims to compensate for the lost kingdom of Heaven by setting up on earth the ideological economies of democracy and justice, economies which, by the very force of the spiritual derangement involved, will lead to the values of cruelty, exploitation, and slavery. “There will be wars such as have never been waged on Earth. I foresee something terrible, chaos everywhere. Nothing left which is of any value, nothing which commands, ‘Thou shalt!’” Ecce homo; behold the man, homo modernus, homo nihilismus. Nihilism—the state of human beings and societies faced with a total eclipse of all values—thrives in the shadow of God’s death. We have vanquished God, but we have yet to vanquish the nihilism that has risen up within us to take God’s place. There is a profound nihilism at work in this world. How are we to deal with this, the legacy of our greatest deed? There is no going back; there can be no going back. We are perched atop a juggernaut; the reins of that sad cart have been passed to us by the four Horsemen of modernity—Nietzsche, Freud, Marx and Darwin. Do we heave back on them now? I think not—we must drive them ever faster, until the juggernaut topples and we nimbles, we free spirits, have the opportunity to leap forward and beyond our time. Play more soccer.

Monday, June 5, 2006

Below the Fold: Forget the Sheepskin, and Follow the Money, or Please Don’t Ask What a University is For…

Garbed in cap and gown and subjected probably for the first time in their lives to quaint Latin orations, three quarters of a million students, sheepskin in hand, will bound forth into the national economy, hungry for jobs, economic security, and social advancement. They exit a higher education economy that looks and works more and more like the national economy they now enter. The ivory tower has become the office block, and its professors highly paid workers in a $317 billion business.

Some of this is, of course, old news. From the 1964 Berkeley Free Speech Movement onward, the corporate vision of American universities as factories of knowledge production and consumption, bureaucratically organized as the late Clark Kerr’s “multiversities,” has been contested, but has largely come to pass.

But even to this insider (confession of interest: I am now completing my 20th year before the university masthead), there are new lows to which my world is sinking. They amount to the transformation of American universities into entrepreneurial firms, and in some cases, multinational corporations.

Most of you by now are used to the fact that universities are big business. The press never stops talking about the $26 billion Harvard endowment, or how the rest of the Ivy League and Stanford are scheming to be nipping at old John Harvard’s much-touched toes. But many non-elite schools are joining the race for big money and to become big businesses. Twenty-two universities now have billion dollar fund-raising campaigns underway. After talking with a colleague from the University of Iowa on another matter, I went to the university web page to discover that Iowa has raised over a billion dollars in a major campaign since 1999 – not bad when you recall that the state itself only has 3 million residents. Even my university, the City University of New York, the ur-urban ladder to social mobility for generations of immigrants and poor, has announced that it is embarking on a billion-dollar crusade.

In addition to billion-dollar endowments, there is revenue to consider. You might be surprised at all of the billion-dollar universities in neighborhoods near you. All it really takes to put a university over the billion-dollar revenue mark is a hospital. Iowa, for instance, is a half-billion-a-year all-purpose education shop; add its medical school and hospital system, and its revenue quadruples. A big-enrollment urban school like Temple does a billion dollars of health care business in Philadelphia, easily surpassing its educational budget of $660 million. These university budgets often depend as much on the rates of Medicare and Medicaid reimbursement as they do on tuitions from their various educational activities.

Tuitions are no small matter, of course, for those who pay them. The elite schools have recently crossed the $40,000-a-year threshold, but the perhaps more important and less noticed change in higher education finances is that states are passing more of the burden for public college and university education directly onto the students themselves. The publics enroll three quarters of the nation’s students. As late as the 1980s, according to Katharine Lyall in a January 2006 article in Change, states paid about half of the cost of their education; now the proportion has dropped to 30%. For instance, only 18% of the University of Michigan’s bills are paid by the state; for the University of Virginia, state support drops to 8%. Baby-boomers on that six-year plan at “old state,” where they paid in the hundreds for their semester drinking and drug privileges, find themselves now paying an average tuition of $5,500 a year for their kids. When you add in room and board, a year at “old state” now costs an average of $15,500, a figure that is 35% of the median income for a U.S. family of four.

So under-funded are important state universities that they are resorting to tax-like surcharges to make up for chronic state neglect. The University of Illinois, for example, is adding an annual $500 “assessment” on student bills for what the university president Joseph White, as quoted by Dave Newbart in the April 7 Chicago Sun-Times, describes as deferred maintenance. “The roofs are leaking and the caulking is crumbling and the concrete is disintegrating,” President White says. Next year it will cost $17,650 to go to Champaign-Urbana. The state will cover only 25% of Illinois’ costs.

Illinois’ President White may be a bit old-school, and perhaps has lagged back of the pack of higher education industrial leaders. He should get smart. Instead of milking the kids on a per capita basis and incurring undying consumer wrath (after all, the plaster was cracked way before I got there, I can hear a student voice or two saying), White should join his peers in a little financial manipulation. What do big firms with billions in assets and large revenue flows do? They sell bonds! So much money, so little interest. And with principal due after a succession of presidents has become so many oil portraits in the board room, so little personal and professional exposure. With the increasingly short tenure of university presidents, even Groucho Marx’s Quincy Adams Wagstaff could get out in time.

American universities have made a very big bet on their future prosperity. They have issued over $33 billion in bonds, according to the May 18 Economist. For the multinationals like Harvard, this is sound money management. To raise working capital, rather than sell some of its stock portfolio at a less than optimal moment or sell the 265-acre arboretum near my house, which would diminish the university endowment, Harvard can use its assets as guarantees. The university’s credit is AAA, interest rates are still historically fairly low, and the bonds’ tax-exempt status makes them attractive investment choices. Harvard can deploy the money in new projects, or re-invest it in higher-yielding instruments and pocket the difference tax-free.

The entrepreneurial universities, that is, those not internationally branded and not elite, are trying to gain a competitive edge. They borrow through bonds to build dormitories and student unions, and to beautify their campuses. Many are borrowing money they don’t have or can’t easily repay. As the saying goes, they are becoming “highly leveraged.” A turn around a town with more than a few universities will likely reveal how it’s raining dorm rooms locally. Here in Boston, it has afflicted universities on both sides of the Charles. Even an avowedly commuter campus like the University of Massachusetts-Boston is building dorms to create that market-defined “campus” feel. Bonds pay for the dorms, and the students, through higher rents, pay them off.

The educative value of dorm living, smart remarks aside, is rather problematic. Talking with an old friend who heads an academic department at a Boston university, I have begun to understand, however, the business logic at work. His bosses have explained the situation thus: the last of the baby boomer progeny are passing through the system, and a trough lies behind them. The children of baby-boomers, alas, prefer the reproductive freedoms of their parents, and are having children late as well. International students, full-tuition payers and once the source of great profit, are increasingly choosing non-American universities, for a variety of reasons, some related to our closed-door policy after 9/11. Add income difficulties among the American middle class, and the entrepreneurial universities calculate that they must improve their marketability and take business from others. Expand market share, create new markets (new diplomas, new student populations), or fight to keep even, they reason. Or face decline, now perhaps even a bit steeper since they are in hock for millions of dollars in bond repayments. The “high yield” customer is the traditional customer, a late adolescent whose parents have deepish pockets. So dorms, fitness gyms, and student unions it is, and the faculty is mum.

In the great expand-or-die moment occurring among America’s entrepreneurial universities, you would think faculty would be making out, but they aren’t. Let us set aside for another time comment on the highly limited American Idol-style star searches among the elite schools and the entrepreneurs’ somewhat desperate casting about for rainmakers and high-profile individuals who can help in creating a distinctive brand for their paymasters. College and university faculty salaries as a whole have stagnated since 1970, the U.S. Department of Education reports. Part of the reason is that although the number of faculty has risen 88% since 1975, the actual number of tenured faculty has increased by only 24%, and their proportion of the total has dropped from 37% in 1975 to 24% in 2003. Full-time, non-tenure track and part-time faculty are being used to meet increased demand. Universities are succeeding in gradually eliminating tenure as a condition of future faculty employment.

Forty-three years after Kerr presented his concept of the “multiversity,” the facts conform in many respects to his vision. American universities are massive producers of knowledge commanded by technocrats who guide their experts toward new domains of experiment and scientific discovery. They possess a virtual monopoly on postsecondary education, having adapted over the past half century to provide even the majority share of the nation’s technical and applied professional training.

But swimming with instead of against the stream of American capitalism over the past half century has cost American universities what few degrees of freedom they possessed. They have become captives of corporate capitalism and have adopted its business model. They are reducing faculty to itinerant instructors. Bloated with marketeers, fund-raisers, finance experts, and layers of customer service representatives, they are complicated and expensive to run, and risky to maintain when the demographic clock winds down or competition intensifies. Moreover, as Harry Lewis, a Harvard College dean pushed out by the outgoing President Larry Summers, put it rather archly in the May 27 Boston Globe, students whose parents are paying more than $40,000 a year “expect the university to treat them like customers, not like acolytes in some temple they are privileged to enter.”

As a priest in the temple, I find it hurts to note how much further down the road we have gone in reducing teaching and learning to a simple commodity. However, in demanding to be treated as customers, students and their parents are simply revealing the huckster we have put behind the veil. Their demands cannot change the course of American universities for the better, but they tell those of us still inside where we stand, and where we must begin anew our struggle.

Random Walks: Band of Brothers

While a large part of mainstream America was blissfully enjoying their long Memorial Day weekend, fans of the Ultimate Fighting Championship franchise were glued to their Pay-Per-View TV sets, watching the end of an era. In the pinnacle event of UFC-60, the reigning welterweight champion, Matt Hughes, faced off against UFC legend Royce Gracie — and won, by technical knockout, when the referee stopped the fight about 4 minutes and 30 seconds into the very first round.

To fully appreciate the significance of Hughes’ achievement, one must know a bit about the UFC’s 12-1/2-year history. The enterprise was founded in 1993 by Royce’s older brother, Rorion Gracie, as a means of proving the brutal effectiveness of his family’s signature style of jujitsu. The concept was simple, yet brilliant: invite fighters from every conceivable style of martial art to compete against each other in a full-contact, no-holds-barred martial arts tournament, with no weight classes, no time limits, and very few taboos. No biting, no fish-hooks to the nostrils or mouth, no eye gouging, and no throat strikes. Everything else was fair game, including groin strikes.

(Admittedly, the fighters tended to honor an unspoken “gentlemen’s agreement” not to make use of groin strikes. That’s why karate master Keith Hackney stirred up such a controversy in UFC-III when he broke that agreement in his match against sumo wrestler Emmanuel Yarbrough and repeatedly pounded on Yarbrough’s groin to escape a hold. I personally never had a problem with Hackney’s decision. He was seriously out-sized, and if you’re going to enter a no-holds-barred tournament, you should expect your opponent to be a little ruthless in a pinch. But the universe meted out its own form of justice: Hackney beat Yarbrough but broke his hand and had to drop out of the tournament.)

The first UFC was an eight-man, single-elimination tournament, with each man fighting three times — defeating each opponent while still remaining healthy enough to continue — to win the title. Since no state athletic commission would ever consider sanctioning such a brutal event, the UFC was semi-underground, finding its home in places like Denver, Colorado, which had very few regulations in place to monitor full-contact sports. Think Bloodsport, without the deaths, but plenty of blood and broken bones, and a generous sampling of testosterone-induced cheese. (Bikini-clad ring girls, anyone?)

Rorion chose his younger brother, Royce, to defend the family honor because Royce was tall and slim (6’1″, 180 pounds) and not very intimidating in demeanor. He didn’t look like a fighter, not in the least, and with no weight classes, frequently found himself paired against powerful opponents with bulging pecs and biceps who outweighed him by a good 50 pounds or more. And Royce kicked ass, time and again, winning three of the first four UFC events. (In UFC-III, he won his first match against the much-larger Kimo, but the injuries he sustained in the process were sufficient to force him to drop out of the tournament.)

He beat shootfighter Ken Shamrock (who later moved to the more lucrative pro-wrestling circuit) not once, but twice, despite his size disadvantage. His technique was just too damned good. Among other things, he knew how to maximize leverage so that he didn’t need to exert nearly as much force to defeat his opponents. Shamrock has said that Gracie might be lacking in strength, “but he’s very hard to get a hold of, and the way he moves his legs and arms, he always is in a position to sweep or go for a submission.”

UFC fans soon got used to the familiar sight of the pre-fight “Gracie Train”: When his name was announced, Royce would walk to the Octagon, accompanied by a long line of all his brothers, cousins, hell, probably a few uncles and distant cousins just for good measure, each with his hands on the shoulders of the man in front of him as a show of family solidarity and strength. And of course, looking on and beaming with pride, was his revered father, Helio Gracie (now 93), who founded the style as a young man — and then made sure he sired enough sons to carry on the dynasty.

Royce’s crowning achievement arguably occurred in 1994, when, in UFC-IV’s final match, he defeated champion wrestler Dan “The Beast” Severn. Many fight fans consider the fight among the greatest in sports history, and not just because Severn, at 6’2″ and 262 pounds, outweighed Royce by nearly 100 pounds. Technique-wise, the two men were very well-matched, and for over 20 minutes, Severn actually had Royce pinned on his shoulders against the mesh wall of the Octagon. Nobody expected Royce to get out of that predicament, but instead, he pulled off a completely unexpected triangle choke with his legs, forcing Severn to tap out.

For all his swaggering machismo, Royce was one of my heroes in those early days, mostly because I had just started training in a different style of jujitsu (strongly oriented toward self-defense), at a tiny storefront school in Brooklyn called Bay Ridge Dojo. True, it was a much more humble, amateur environment than the world of the UFC, but Royce gave me hope. I trained in a heavy-contact, predominantly male dojo, and at 5’7″ and 125 pounds, was frequently outsized by my classmates. My favorite quote by Royce: “I never worry about the size of a man, or his strength. You can’t pick your opponents. If you’re 180 pounds and a guy 250 pounds comes up to you on the street, you can’t tell him you need a weight class and a time limit. You have to defend yourself. If you know the technique, you can defend yourself against anyone, of any size.” And he proved it, time and again.

For smaller mere mortals like me, with less developed technique, size definitely mattered. The stark reality of this was burned into my memory the first time one of the guys kicked me so hard, he knocked me into the wall. Needless to say, there was a heavy physical toll: the occasional bloody nose, odd sprain, broken bone, a dislocated wrist, and a spectacular head injury resulting from a missed block that required 14 stitches. (I still proudly bear a faint, jagged two-inch scar across my forehead. And I never made that mistake again.) I didn’t let any of it faze me. I worked doggedly on improving my technique and hired a personal trainer, packing on an extra 30 pounds of muscle over the course of two years. Not very feminine, I admit: I looked like a beefier version of Xena, Warrior Princess. At least I could take the abuse a little better. In October 2000, I became only the second woman in my system’s history to earn a black belt.

I learned a lot over that seven-year journey. Most importantly, I learned that Royce was right: good technique can compensate for a size and strength disadvantage. It’s just that the greater the size differential, the better your technique has to be, because there is that much less margin for error. And if your opponent is equally skilled — well, that’s when the trouble can start, even for a champion like Royce.

After those early, spectacular victories, Royce faded from the UFC spotlight for awhile, focusing his efforts on the burgeoning Gracie industry: there is now a Gracie jujitsu school in almost every major US city. He’d proved his point, repeatedly, and it’s always wise to quit while you’re at the top. But every now and then he’d re-emerge, just to prove he still had the chops to be a contender. As recently as December 2004, he defeated the 6’8″, 483-pound (!) Chad Rowan in two minutes, 13 seconds, with a simple wrist lock. (“Either submit, or have it broken,” he supposedly said. Rowan wisely submitted.)

The very fact of Royce’s success inevitably caused the sport to change. Fighters were forced to learn groundfighting skills. Back when the UFC was all about martial arts style versus style, many fighters in more traditional disciplines — karate, tae kwon do, kickboxing — had never really learned how to fight effectively on the ground. The moniker changed from No-Holds-Barred to Mixed Martial Arts — a far more accurate designation these days. Today, the UFC has time limits (with occasional restarts to please the fans, who get bored watching a lengthy stalemate between two world-class grapplers), and even more rules: no hair-pulling, and no breaking fingers and toes. The formula is commercially successful — UFC events typically garner Nielsen ratings on a par with NBA and NHL games on cable television — but these are not conditions that favor the Gracie style. Eventual defeat was practically inevitable.

And so it came to pass over Memorial Day weekend. The UFC torch has passed to Hughes. But Royce’s legacy is incontrovertible. He changed the face of the sport forever by dominating so completely that he forced everyone else to adapt to him. That’s why he was one of the first three fighters to be inducted into the UFC Hall of Fame (along with Shamrock and Severn). Royce Gracie will always be a legend.

When not taking random walks at 3 Quarks Daily, Jennifer Ouellette muses about physics and culture at her own blog, Cocktail Party Physics.

Talking Pints: 1896, 1932, 1980 and 2008–What Kenny Rogers Can Teach the Democrats

by Mark Blyth

“You got to know when to hold ‘em, know when to fold ‘em, know when to walk away, and know when to run.”

Kenny Rogers may seem an unlikely choice for the post of Democratic party strategist, but the advice of ‘the Gambler’ may in fact be the single best strategy that the Democrats can embrace when considering how, and whom, to run in 2008. Although we are still a long way from the next US Presidential election, the wheels seem to have truly come off the Republicans’ electoral wagon. The ‘political capital’ Bush claimed after his reelection was used up in the failed attempt to privatize Social Security and in the continuing failure to stabilize Iraq. Sensing this, Congressional Republicans (and fellow travelers) increasingly distance themselves from Bush, claiming that, in the manner of small furry passengers who have decided that the cruise was not to their liking after all, the Bushies (and/or the Congressional Republicans) have betrayed the Reagan legacy, that Iraq was a really bad idea all along, and that when it’s all going to pot you might as well grab what you can in tax cuts for yourselves and head for the exits.

Such an uncharacteristic implosion from the usually well-oiled Republican machine might lead one to expect the Democrats to make real political inroads for the first time in years. Yet, as the line attributed to Abba Eban about the Palestinians goes, the Democrats “never miss an opportunity to miss an opportunity.” This lack of Democratic political bite, when seen against the backdrop of an already lame-duck second-term President, is remarkable. For example, leading Democrats cannot get a break. Joe Biden makes a ‘major’ policy speech on Iraq, and outside of the New York Times-reading ‘chattering classes’ it is roundly ignored. While some Democrats argue for a troop pull-out in Iraq, others in the same party urge ‘stay the course,’ thereby ‘mixing the message’ ever further. Even populist rabble-rouser Howard Dean, now head of the Democratic National Committee, has all but disappeared from view.

Yet should we be surprised by this? Perhaps the Democrats are a party so used to offering ‘GOP-lite’ that they really have no independent identity. Just as Canada has no identity without reference to the USA (sorry, Canada, but you know it’s true), so the Democrats have no identity without defining themselves against the GOP. But to be against something is not to be for anything. Given that the Republicans are clearly for something, the ‘fact-free politics’ of ‘freedom’, ‘prosperity’, ‘lower taxes’, ‘individual initiative’, and other feel-good buzzwords, the Democrats seem to have no one, and no big ideas, to take them forward, except perhaps one person – Hillary Clinton.

It’s pretty obvious that she wants the job. Much of the media has decided that she already has the Democratic nomination in the bag, but is split on whether she can actually win. To resolve this issue, we need the help of an expert, and this is where I would like to call in Kenny Rogers. Mr. Rogers’ advice is that you have to know when to hold, fold, walk, or run. I would like to suggest that the best thing that the Democratic Party can do is to realize that this next Presidential election is exactly the time to do some serious running; as far away from the White House as possible. I would like to propose the following electoral strategy for the Democrats:

  1. Hillary Clinton must run in 2008. She will lose. This is a good thing.

  2. If the Democrats lose in 2008, they might well win the following three elections.

  3. If the Democrats nominate anyone other than Hillary, they might actually win in 2008, and this would be a disaster.

OK, how can losing the next election be a good thing for the Democrats? The answer lies in how some elections act as ‘critical junctures’, moments of singular political importance where, because an election went in one direction rather than the other, the next several elections went that way too. 1896 was such an election for the Republicans, as was 1932 for the Democrats, when they overturned Republican control and began their own long period of political dominance into the 1970s. Indeed, it is worth remembering that the Democratic party used to be the majority party in the US, and that the institutions and policies they set up in the 1930s and 1940s, from Social Security to Fannie Mae, are as popular as ever. Indeed, one might add that only one of nine post-WW2 recessions occurred when the Democrats were in power. How then did the Democrats become the weak and voiceless party that they are now? The answer was Ronald Reagan and the critical election of 1980.

Reagan did something that no Democratic politician ever did before: he (or at least those around him) really didn’t give a damn about the federal budget. Reagan managed to combine tax cuts, huge defense expenditure increases, super-high interest rates, and welfare cuts into a single policy package. Despite the supposed ability of voters to see through such scams and recognize that tax cuts now mean tax raises later, Reagan managed to blow a huge hole in federal finances and still be rewarded for it at the ballot box. Despite their fiscal effects, this tax-cutting ‘thing’ became extremely popular, and the Democrats had to find an issue of their own to argue against them. That new issue was the so-called ‘twin deficits’ that Reagan’s policies created, and the policy response was deficit reduction.

Under Reagan (and Bush the elder) the US ran increasingly large deficits both in the federal budget and in the current account. The Democrats of the day seized on these facts and banged on and on about them for a decade as if the very lives of America’s children depended on resolving them. The problem, however, was that as the world’s largest economy with the deepest capital markets, so long as foreigners were willing to hold US dollar-denominated assets, no one had to pay for these deficits with a consumption loss. The US economy became the equivalent of a giant visa card where the monthly bill was only ever the minimum payment due. Take the fact that no one ever refused US bonds, and add in that most voters would have a hard time explaining what the budget deficit was, let alone why it was this terrible thing that had to be corrected with tax increases, and you have a political weapon as sharp as a doughnut. By arguing for a decade that the twin deficits were real and dangerous, and that tax increases and consumption losses (pain) were the only way forward, the Democrats went from being the party of ‘tax and spend’ to being the party of tax increases and balanced budgets, which simply played into Republican hands.

Which brings us to why the election of the other Clinton (Bill) in 1992 was not a critical turning point away from Republican politics in the way that 1932 was. Having banged on about how terrible the deficits were, once in power the Democrats had to do something about them. Being boxed into a fiscal corner, Bill Clinton’s proposals for a budget stimulus and universal healthcare collapsed, and all that was left was (the very worthy) EITC and (the very painful) deficit reduction. Cleaning up the fiscal mess that the Republicans had made became Clinton’s main job, and this helped ensure that by 1996 Clinton was seen as a lame duck President who hadn’t really done anything. His unexpected win in 1996 confirmed this insofar as it resulted in no significant policy initiatives except the act of a Democrat ‘ending welfare as we know it.’ The asset bubble of the late 1990s may have made the economy roar, and Clinton’s reduction of the deficit may have helped in this regard, but the bottom line was that the Democrats were now playing Herbert Hoover to the Republicans’ Daddy Warbucks.

So Bush the younger was elected and he continued the same tax-cutting agenda, but coupled this to huge post-9/11 military expenditures and the Iraqi adventure. As a result of these policies the US carries current account and federal deficits that would make Reagan blush, the Republicans have a splintering party and support base, and the country as a whole is mired in Iraq with a very unpopular President at the helm. Surely then 2008 can be a new 1932 in a way that 1992 wasn’t? The inauguration of a new era of Democratic dominance? Possibly…but only if the Democrats lose the 2008 election rather than win it. To see why, let us examine what might happen if the Democrats did win the next election with Hillary Clinton at the helm.

In terms of security politics it’s far more likely that Iraq will go from bad to worse than from worse to better over the next few years. It’s a mess regardless of who is in charge, and the choices at this point seem to be ‘stay forever’ or ‘get out now.’ If the Republicans sense that they are going to lose in 2008, the smart thing to do would be to keep the troops in Iraq so that the Democrats would be the ones who would have to withdraw US forces. When that happens, Iraq goes to hell in a hand-basket, and the Democrats get blamed for ‘losing Iraq’ and ‘worsening US security.’ If, on the other hand, Bush pulls US forces out before 2008 and the Democrats win, the local and global security situation worsens, and the probability that ‘something really bad’ happens on the Democrats’ watch rises, which they then get the blame for.

In terms of economic policy the structural problems of the US economy are only going to get worse over time. Since Bush came into office the dollar has lost over a third of its value against the Euro and around 20 percent against other currencies. This means higher import costs, which along with higher oil prices, suggests future inflation and larger deficits. Given that the US relies on foreigners holding US debt, any future inflation and larger deficits would have to be offset with higher interest rates. This would negatively impact the already over-inflated US housing market, perhaps bursting the bubble and causing a deep recession. So, regardless of who is in office in 2008, the economy is likely to be in worse shape then than it is now. If the Democrats are in power and the economy tanks, they will get the blame for these outcomes regardless of the policies that actually brought the recession about.

In terms of cultural politics, social issues are likely to come to a head with the new Roberts Court finding its feet. It is probably safe to say that there will be an abortion case before the Court during the 2008-2012 cycle, if not before. This is usually treated as the clinching argument for why the Democrats must win the next election rather than lose it. Again, I disagree. Precisely because the only people who still think Bush is doing a good job are conservatives with strong social policy concerns, you can bet they will mobilize to get this policy through even if the rest of the world is crashing about their ears. I say let them have it. The sad truth is that if Roe v Wade is overturned rich white women will still get abortions if they need them, and poor women will not be much worse off since they don’t get access to abortion in most of the country as it is. But more positively, if the Republicans go for this, anyone who says “I’m a moderate Republican” or “I’m socially liberal but believe in low taxes” etc., has to confront an awkward fact: that they self-identify with an extremely conservative social agenda, one that treats women’s bodies in particular, and sexual issues in general, as objects of government regulation. If this comes to pass then the Democrats have a chance to split the Republican base in two, isolate moderates in the party, and turn the Republicans into a permanent far right minority party.

Finally, in terms of electoral politics, the Democrats have to face up to an internal problem – Hillary Clinton really is unelectable. While she may be smart, experienced, popular in the party, and have a shit-load of money behind her, the very appearance of her on television seems to result in the instant emasculation of around 30 million American men. Indeed, 33 percent of the public polled today say that they would definitely vote against her, and this at a time when Bush’s numbers are the worst of any President in two generations. It may be easy to forget how much of a hate figure Hillary Clinton was in the 1990s. One way to remember is to simply search amazon.com for books about Hillary Clinton and see how the ‘hate book’ industry that dogged the Clintons all through the 1990s is moving back into production with a new slew of ‘why she is evil incarnate’ titles. But Hillary Clinton is not just a hate figure for the extreme right. After a decade of mud slinging (that is about to go into high gear again) she is simply too damaged to win. There is a bright side to all this. Hillary Clinton is a huge figure in the Democratic party in terms of fundraising, profile, and ambition. The only way she will get out of the way and allow new figures in the party to come forward who might actually win is by her losing; so let her lose.

In sum, ‘knowing when to walk away, and when to run’ is a lesson the Democrats need to learn, and losing in 2008 would be ‘the Gambler’s’ recommendation. First, making the Republicans clean up their own mess would not only be pleasing to the eye, it would be electorally advantageous. Forcing the Republicans to accept ownership of the mess that they have made makes their ability to ‘pass the buck’ onto the Democrats, as happened to Bill Clinton, null and void. Clearly, from the point of view of Democratic voters the probable consequences of a third Republican victory have serious short-term costs associated with them, but it is also the case that the possible long term benefits of delegitimating their policies, watching their base shatter, and not having to clean up their mess and get blamed for it, could be greater still. Second, if the Democrats do win, then all the problems of Iraq, the declining dollar, the federal and trade deficits, higher interest rates, a popping of the housing bubble, a possible deep recession, and being blamed for the end of ‘the visa card economy’, become identified with the Democrats. They come in, get blamed for ending the party, clean up the mess, and get punished for it at the next election. Seriously, why do this? Third, if it is the case that Hillary Clinton will indeed get the nomination, then let her have it. She cannot win, so why not kill two birds with one stone? Nominate Hillary, run hard, and lose. That way Hillary cannot get nominated again, new blood comes into the party, and the Republicans have to clean up their own mess. Do this, and 2012 really might be 1932 all over again.

Mark Blyth’s other Talking Pints columns can be seen here.

NOTE: This essay is posted by Abbas Raza due to a problem with Mark’s computer.

Richard Wagner: Orpheus Ascending

Australian poet and author Peter Nicholson writes 3QD’s Poetry and Culture column (see other columns here). There is an introduction to his work at peternicholson.com.au and at the NLA.

A reassessment of Wagner and Wagnerism

The following (Part 1 June, Part 2 July, Part 3 August) is excerpted from a talk originally given at the Goethe Institut in Sydney on April 18, 1999 and subsequently published in London by the Wagner Society of the United Kingdom in 2001.

Part 1:

There are several versions of the ancient Greek myth of Orpheus. In the best known of these Orpheus goes down to the Underworld to seek the return of his wife Eurydice who had been killed by the bite of a snake. The lord of the Underworld agrees on the condition that Orpheus should not turn round and look at Eurydice until they reach the Upper World. The great singer and musician who could charm trees, animals and even stones could not survive this final and most perilous of temptations. He turns to look on his beloved wife and she is lost to him forever. Another version of the myth tells of Orpheus being torn to pieces by the Thracian women or Maenads; his severed head floated, singing, to Lesbos.

Wagner’s dismembered head continues to sing, unheard by many and misunderstood by most. That beautiful yet volatile singing head with its Janus face, enigmatically poised between black holes and galaxies; that is the head we still find puzzling. And because our civilisation does not like puzzles, and wishes to rationalise whatever has provoked it to think or feel, our best critical efforts have reduced one of the greatest creative and cultural phenomena of Western culture to manageable proportions.

Well, Wagner continues to ascend, leaving Wagnerism behind to do battle on any number of fronts, whether at Bayreuth with its interminable family squabbling, or in the raft of prose that has followed in the wake of the German gigantomane, or through those cliches that Wagnerian ideology has left us with as the Valkyries ride their helicopters across a Vietnamese apocalypse or another wedding is inaugurated to the strains of the bridal chorus from Lohengrin.

That is how we now manage the Wagnerian cosmos. Cliche helps us to feel comfortable near this unquiet grave with its all-too-human disturbing element. Humour helps us, and it’s necessary—we haven’t the fortitude, the talent, the persistence, or, indeed, the genius, to bring into being imperishable works of art. We like to laugh when Anna Russell tells us, ‘I’m not making this up you know’. But of course that is exactly what Wagner did do; he made up an entire aesthetic and cultural world that we still have not been able to come to terms with. We try from time to time to make sense of the life and the work, but never without those attendant twins, partisanship and antagonism.

So Wagner is ascending, like Orpheus, to his place in the cultural imperium, alienated from the world’s embrace, a lonely figure, so lonely in his own life, and lonely still. Liszt saw all too clearly what Wagner would have to accustom himself to, and in a letter to his friend, when Wagner intimated thoughts of ending it all, advised, ‘You want to go into the wide world to live, to enjoy, to luxuriate. I should be only too glad if you could, but do you not feel that the sting and the wound you have in your own heart will leave you nowhere and can never be cured? Your greatness is your misery; both are inseparably connected, and must pain and torture you until you kneel down and let both be merged in faith!’ [Hueffer. Correspondence of Wagner and Liszt, Vienna House, 1973, Vol One, p 273]

The world may celebrate his work, technology may bring his music to every part of the planet, yet another monograph may be published; Wagner turns to look for his audience, and at once he loses that audience. The bloodlust that can be unleashed by a good Wagner performance, the obsessions notoriously associated with Wagnerism, the strident approbation and denunciation—these are not the cultural signifiers of classicism freely given to Shakespeare, Mozart or Goethe. Wagner evades classicism still; yet that is his predestined end. We are still too close to the psychic firestorm of his imagination, and we are still too disturbed by the misuse of his art, for that classicism to show any signs of emerging. Even as passionate a Wagnerian as Michael Tanner cannot bring himself to fully equate Shakespeare and Wagner, an equation that cultural history proposes but which we are not yet up to accepting.

Recently a German said to me, ‘You English have your Shakespeare; we have our Wagner.’ Granted my passion for Wagner and my lack of cultural chauvinism it was perhaps odd that I was so shocked by her remark. I didn’t want to say, ‘But Shakespeare is greater than Wagner.’ But I did feel a strong urge to protest about Wagner’s being seen as part of the cultural landscape in the same way that Shakespeare is (even for Germans, thanks to Tieck and Schlegel). [Michael Tanner, Wagner. HarperCollins, 1996, p 211]

Wagner’s uncertain cultural status reaches beyond our historical moment. Perhaps a Hegelian analogy is best: thesis, antithesis, synthesis. The life, 1813-1883, represents the thesis—and what a proposition it is. The twentieth century represents the antithesis replete with reductionism, antagonism, equally disreputable fanaticism and hatred. It remains for the future to offer the synthesis. And when that synthesis occurs, then Wagner will have ascended to the Upper World; his audience will not flinch from looking at him directly. Shakespeare was lucky not to have left much biographical debris behind. When the biographers and critics got to work, the focus of their studies was necessarily on the plays and poems themselves. The lacerated spirit that gave birth to the murderous rampage of a Macbeth, the suicidal melancholy of a Hamlet or the self-hatred and disgust of a Lear was easily accommodated to textual analysis and theorising because biographical motive was missing. The hunt for the Dark Lady of the Sonnets was a pastime for some but, on the whole, scholars were prepared to indulge Shakespeare’s evident greatness. Only recently have they come around to asking why Shakespeare’s younger daughter couldn’t write. No such luck for Wagner. There is enough biographical material laid on the line to keep critics in clover until the end of time—letters, autobiographies, diaries, pamphlets, theoretical writings. And that’s just the primary material. Has any scholar yet read all of it? Then there is the secondary material and we know that it is now beyond the ability of anyone to read it, let alone make sense of it. This deluge of material shows no sign of abating. Are we now any closer to understanding the phenomenon of Wagner? Wagnerism seems to be one of the chief ways with which we seek to cope with what is now considered to be the ‘problem’ of Wagner.

Thus two mutually antagonistic modes of thinking fail to reach any accommodation with one another. It seems that Wagnerian historiography must advance, not by the slow accumulation of historical and cultural detail, but always explosively, so that an apparent understanding of events is wrenched apart by either previously unknown factual details or fresh polishing of a facet of the Wagnerian rough diamond.

[Parts 2 and 3 of Orpheus Ascending can be read here and here.]

Monday Musing: Susan Sontag, Part I

In an essay about the Polish writer Adam Zagajewski, Sontag writes that as Zagajewski matured he managed to find “the right openness, the right calmness, the right inwardness (he says he can only write when he feels happy, peaceful.) Exaltation—and who can gainsay this judgment from a member of the generation of ’68—is viewed with a skeptical eye.” She’s writing about what Zagajewski was able to achieve but she is also, of course, writing about herself.

Sontag was also a member of the generation of ’68, if a slightly older one. She too achieved an openness, calm, and inwardness as she matured, though it came with regrets and the sense that the pleasure of a literary life is an ongoing battle against a world that is predisposed to betray that pleasure.

Writing about Zagajewski again, she explains that his temperament was forged in the fires of an age of heroism, an ethical rigor made sharp by the demands of history. These men and women spent decades trying to write themselves out of totalitarianism, or they were trying to salvage something of their selves from what Sontag does not hesitate to call a “flagrantly evil public world”. And then suddenly, in 1989, it was all over. The balloon popped, the Wall came down. Wonderful events, no doubt, but with the end of that era came the end of the literary heroism made possible by its constraints. Sontag says, “how to negotiate a soft landing onto the new lowland of diminished moral expectations and shabby artistic standards is the problem of all the Central European writers whose tenacities were forged in the bad old days.”

Sontag also managed to come in softly after scaling the heights of a more exuberant time. In her case, she wasn’t returning to earth after the struggle against a failing totalitarianism; she was coming down from the Sixties. But that is one of the most remarkable things about her. Not everyone was able to achieve such a soft landing after the turbulence and utopian yearnings of those years.

Sontag’s early writings are shot through with a sense of utopian exaltation, an exaltation so often associated with the Sixties. In her most ostensibly political work, “Trip to Hanoi”, she talks specifically about her mood in those days. As always, she is careful not to overstate things. “I came back from Hanoi considerably chastened,” she says. But then she goes on, heating up. “To describe what is promising, it’s perhaps imprudent to invoke the promiscuous ideal of revolution. Still, it would be a mistake to underestimate the amount of diffuse yearning for radical change pulsing through this society. Increasing numbers of people do realize that we must have a more generous, more humane way of being with each other; and great, probably convulsive social changes are needed to create these psychic changes.”

You won’t find Sontag in a more exalted state than that. Rarely, indeed, does she allow herself to become so agitated and unguarded, especially in the realm of the outwardly political. But that is exactly where one must interpret Sontag’s politics, and exaltation, extremely carefully.

Sontag’s political instincts gravitate toward the individual, in exactly the same way that she reverses the standard quasi-Marxian directions of causality in the above quote. Marxists generally want to transform consciousness as the necessary first step toward changing the world. In contrast, Sontag wants the world to change so that we can get a little more pleasure out of consciousness. Convulsive social changes, for Sontag, are but extreme measures for effecting a transformation that terminates in psychic changes. Politics means nothing if it obscures the solid core of the individual self. Her commitment to this idea gives all of her writing a Stoic ring even though she never puts forward a theory of the self or a formal ethics. It is the focus on her particular brand of pleasure that provides the key. Pleasure and the Self are so deeply intertwined in Sontag’s writing that one cannot even be conceived without the other.

Writing years later, in 1982, about Roland Barthes, Sontag spoke again of pleasure and the individual self. Barthes’ great freedom as a writer was, for Sontag, tied up with his ability to assert himself in individual acts of understanding. Continuing a French tradition that goes back at least to Montaigne (a man not unaware of the Stoics), she argues that Barthes’ writing “construes the self as the locus of all possibilities, avid, unafraid of contradiction (nothing need be lost, everything may be gained), and the exercise of consciousness as a life’s highest aim, because only through becoming fully conscious may one be free.” She speaks about the life of the mind as a “life of desire, of full intelligence and pleasure.”

A human mind, i.e., an individual mind, will, at its best, be ‘more generous’ and ‘more humane’. But for Sontag, it is what humans have access to in the world of ideas, as individual thinking agents, that marks out the highest arena of accomplishment.

“Of course, I could live in Vietnam,” she writes in A Trip to Hanoi, “or an ethical society like this one—but not without the loss of a big part of myself. Though I believe incorporation into such a society will greatly improve the lives of most people in the world (and therefore support the advent of such societies), I imagine it will in many ways impoverish mine. I live in an unethical society that coarsens the sensibilities and thwarts the capacities for goodness of most people but makes available for minority consumption an astonishing array of intellectual and aesthetic pleasures. Those who don’t enjoy (in both senses) my pleasures have every right, from their side, to regard my consciousness as spoiled, corrupt, decadent. I, from my side, can’t deny the immense richness of these pleasures, or my addiction to them.”

Sontag’s political thinking is driven by the idea that what is otherwise ethical is often thereby sequestered from what is great, and what is otherwise great is often mired in the unethical. She never stopped worrying about this problem and she ended her life as conflicted about it as ever. It was a complication that, in the end, she embraced as one of the interesting, if troubling, things about the world.

But for a few brief moments, as the Sixties ratcheted themselves up year after year, she indulged herself in considering the possibility that the conflict between ethics and greatness could be resolved into a greater unity. She thought a little bit about revolution and totality. She got excited, exalted. Summing up thoughts about one of her favorite essays, Kleist’s “On the Puppet Theater,” Sontag leaves the door open for a quasi-Hegelian form of historical transcendence. She says, “We have no choice but to go to the end of thought, there (perhaps), in total self-consciousness, to recover grace and innocence.” Notice the parentheses around ‘perhaps’. She’s aware that she (and Kleist) are stretching things by saying so, but she can’t help allowing for the possibility of ‘total self-consciousness’. Often, when Sontag uses parentheses, she is allowing us a glimpse into her speculative, utopian side.

In “The Aesthetics of Silence” (1967), for instance, she equates the modern function of art with spirituality. She defines this spirituality (putting the entire sentence in parentheses): “(Spirituality = plans, terminologies, ideas of deportment aimed at resolving the painful structural contradictions inherent in the human situation, at the completion of human consciousness, at transcendence.)”

***

In the amazing, brilliant essays that make up the volume Against Interpretation, it is possible to discover more about the utopian side of Sontag’s thinking. Drawing inspiration from Walter Benjamin, whose own ideas on art explored its radically transformative, even messianic potential, Sontag muses that “What we are witnessing is not so much a conflict of cultures as the creation of a new (potentially unitary) kind of sensibility. This new sensibility is rooted, as it must be, in our experience, experiences which are new in the history of humanity…”

Again with the parentheses. It is as if, like Socrates, she always had a daimon on her shoulder warning her about pushing her speculations too far. But the talk of unity is an indication of the degree to which she was inspired by the events of the time, or perhaps more than the specific events of the time, by the mood and feel of the time. Her sense that there was an “opening up” of experience, sensibility, and consciousness drove Sontag to attack certain distinctions and dichotomies she saw as moribund. Again following closely in the footsteps of Walter Benjamin and his influential “The Work of Art in the Age of Mechanical Reproduction,” she writes, “Art, which arose in human society as a magical-religious operation, and passed over into a technique for depicting and commenting on secular reality, has in our own time arrogated to itself a new function…. Art today is a new kind of instrument, an instrument for modifying consciousness and organizing new modes of sensibility.” This led her to a central thesis, a thesis that drove her thinking throughout the Sixties, a thesis that is nestled into every essay that makes up Against Interpretation. She sums it up thusly:

“All kinds of conventionally accepted boundaries have thereby been challenged: not just the one between the ‘scientific’ and the ‘literary-artistic’ cultures, or the one between ‘art’ and ‘non-art’; but also many established distinctions within the world of culture itself—that between form and content, the frivolous and the serious, and (a favorite of literary intellectuals) ‘high’ and ‘low’ culture.”

Sontag’s famous “Notes on ‘Camp’” is simply a sustained attempt to follow that thesis through. Her defense of camp is a defense of the idea that worth can be found in areas normally, at least back in the Sixties, relegated to the realm of the unserious. The new unity was going to raise everything into the realm of the intellectually interesting, and pleasurable.

Yet, Sontag is not trying to abolish all distinctions. It isn’t a leveling instinct. Even in her youngest days, Sontag was suspicious of the radically democratic impulses that would, say, collapse art and entertainment. Sontag is doing something different. She is trying to show that the arena for aesthetic pleasure should be vastly expanded, but never diluted. She wants the new critical eye to stay sharp and hard. Sontag’s version of pleasure is an exacting one. It is relentless and crystalline. It is an effort.

“Another way of characterizing the present cultural situation, in its most creative aspects, would be to speak of a new attitude toward pleasure. . . . Having one’s sensorium challenged or stretched hurts. . . . And the new languages which the interesting art of our time speaks are frustrating to the sensibilities of most educated people.”

In this, there was always an element of the pedagogue in Sontag. She was trying to teach a generation how to tackle that frustration in the name of aesthetic pleasure. She was driven by her amazing, insatiable greed for greater pleasure. She wanted us to be able to see how many interesting and challenging things there are in her world of art, a world vaster and richer than the one surveyed by the standard critical eye of her time. And at least in the Sixties, her passion for greatness and its pleasures spilled over into a yearning for a societal transformation that would make that passion and pleasure universal…

to be continued…

Monday, May 29, 2006

Teaser Appetizer: The Definition of Health

The World Health Organization (WHO) defines health as “a state of complete physical, mental and social well-being and not merely the absence of disease or infirmity.” This definition entered the books between 1946 and 1948 and has remained unchanged.

Current medical knowledge is desperately struggling – with only partial success – just to “merely” control “disease or infirmity,” while “complete well-being” is unlikely to sprout out of our incomplete knowledge. If your politicians were to legislate health by this definition, they would be in default forever, for one obvious reason: no nation – I repeat, no nation – has the knowledge or the resources to deliver care to match this definition. We all learnt in kindergarten – well, except the politicians – not to promise what we cannot fulfill.

This definition is a lofty, laudable, visionary statement that may reflect a distant aspiration, but its realization is elusive in current practice. In all humility, we should concede that “complete well-being” is probably an unquantifiable metaphysical state, unattainable without taming nature’s evolutionary laws of life and death. And to presume that we have the ability to do so is a whiff of arrogance – an aromatic trait our species emits in abundance.

The realization of this dream probably seemed feasible in 1948, when we had made a quantum leap in understanding infectious diseases and, for the first time in human history, were exuberant in our demonstrated ability to extend longevity by about twenty years in some countries. But that was long before we could predict the explosion of health technology and understand its consequential individual, societal and economic effects.

Isn’t it time we sought a second opinion on the health of this definition and evolved a flexible definition which encompasses current reality and is malleable enough to accommodate future developments?

While the WHO definition stays seemingly immutable, a new framework linked to human rights has evolved. The human-right-to-health paradigm reiterates that the enjoyment of the highest attainable standard of health is a fundamental right of every human being. This linkage has provided an inspirational tool with which to demand “health.” The tenor of this discourse takes a cue from the rhetoric of Kofi Annan: “It is my aspiration that health will finally be seen not as a blessing to be wished for; but as a human right to be fought for.”

This paradigm recognizes that violation of human rights has serious health consequences and that promoting equitable health is a prerequisite to the development of society. The discourse rightly demands the abolition of slavery, torture, abuse of children and harmful traditional practices, and also seeks access to adequate health care without discrimination, safe drinking water and sanitation, a safe work environment, equitable distribution of food, adequate housing, access to health information and gender sensitivity.

All nations are now signatories to at least one human rights treaty that includes health rights. One hundred and nine countries had guaranteed the right to health in their constitutions by the year 2001, which qualifies it as an effective instrument for policy change; but it also raises some difficult questions.

Human rights discourse uses the words health and health care interchangeably. Rony Brauman, past president of Médecins Sans Frontières, comments: “WHO’s definition of a “right to health” is hopelessly ambiguous. I have never seen any real analysis of what is meant by the concept of “health” and “health for all,” nor do I understand how anyone could seriously defend this notion.” The notion would be more defensible if a demand for health care replaced the demand for health.

Yet no country in the world can afford to give all health care to all its citizens all the time. Nations conduct a triage of priorities according to their prejudices, and large swaths of their populations are left out of the health care net. Even nations that have the right to health embedded in their constitutions face a gap between aspirations and resources.

The human rights debate skirts round the issue by invoking the “principle of progressive realization,” which allows resource-strapped countries to promise increments in health care delivery in the future. This effectively gives governments a tool to ration and allocate resources, even if doing so conflicts with individual rights.

The following example illustrates the problem: the post-apartheid government of South Africa enshrined the right to health in the constitution, yet the courts decided against a petitioner who demanded the dialysis he needed for chronic kidney failure. The court ruled that the government did not have an obligation to provide the treatment. The court in essence transferred some responsibility to the individual.

Gandhi, too, expressed his concern that rights without responsibility are a blunder. A responsibility paradigm could supplement the rights movement; a pound of responsibility could prove to be heavier than a ton of rights, but the current noise for rights has muzzled the speech for responsibility, and “complete health” is becoming an entitlement to be ensured by the state without demanding that the family and the individual be equal stakeholders. Hippocrates said, “A wise man ought to realize that health is his most valuable possession and learn to treat his illnesses by his own judgment.”

This conflict will escalate further with the impact of biotechnology. A quote from Craig Venter gives the feel: “It will inevitably be revealed that there are strong genetic components associated with most aspects of what we attribute to human existence — the danger rests with what we already know: that we are not all created equal. … revealing the genetic basis of personality and behavior will create societal conflicts.”

Derek Yach, a respected public health expert and professor at Yale University, says, “With advances in technology, particularly in the fields of imaging and genetic screening, we now recognize that almost all of the population either has an actual or potential predisposition to some future disease.”

We cannot help but rethink health itself before we promise health care. An alternative definition can be derived from the health field concept of Marc Lalonde, who was the health minister of Canada in 1974. He surmised that the interplay of four elements determines health, namely: genetic makeup, environment (including social factors), individual behavior and the organization of health care. The health field model holds many stakeholders accountable.

Each stakeholder approaches health with a seemingly different goal (even though these goals complement each other). A healthy person wishes not to fall sick; a sick person demands quick relief; a health care provider attempts to cure and prevent disease; a molecular biologist envisions control of molecular dysfunction; a public health person allocates resources to benefit the maximum number of people; a health economist juggles finances within the budget; the government facilitates or hampers the delivery of care according to its priorities; and the activist demands that every person has the right to the “highest attainable standard of physical and mental health.”

Many stakeholders mean more questions than answers. Who decides the limits of health a society should attain? Should the boundary stop at basic primary care, or extend to genetic manipulation to deliver well-being? Who decides the mechanism of attaining that limit? Who decides what positive mental well-being is? And who pays for it?

It is apparent that ‘complete well-being’ is as much an oxymoron as ‘airline food’! We urgently need a new definition as a starting point for debate: a definition that is quantifiable for outcomes, accommodative of stakeholders, absorbent of future advances, accountable for delivery of care and cognizant of limitations. The new definition has to be both correct and politically correct. Dr. Brundtland, former director-general of the WHO, wrote in the World Health Report that “The objective of good health is twofold – goodness and fairness; goodness being the best attainable average level; and fairness, the smallest feasible differences among individuals and groups.” We should match our expectations to reality.

These elements, compressed and enveloped into a workable statement, may sound as follows:

Health is a state of freedom from physical and mental consequences of molecular and psychological derangements caused by the interaction of individual biology and the environment; health care is an attempt to reverse such derangement by providing equitable access to all without discrimination within the constraints of available resources and knowledge.

You may call this, if you please: the 3QD definition of health — you read it here first!

Dispatches: Affronter Rafael Nadal

Roland Garros, or tennis’ French Open, started yesterday.  Perhaps you’ve noticed; articles ran in most Sunday papers about it, quite extensive ones too, considering that the French has often been viewed as a third-rate (after Wimbledon and the U.S. Open) Grand Slam tournament, largely because it is usually won by a cadre of specialists instead of the best-known players.  Not only is this perception unfair, but, this year, Roland Garros will be the most important men’s tennis tournament of the year.  Here’s why.

The increasing specialization of tennis has meant that this tournament, the only Grand Slam played on clay, has a set of contenders that is quite distinct from those at the grass courts of Wimbledon and the hardcourts of Flushing Meadows, Queens.  Not only has it been won by players who have not been dominant on the other surfaces, but it has been very difficult for anyone to enjoy repeat success sur la terre battue.  Ten of the last twelve Wimbledons were won by Pete Sampras and Roger Federer; the last five winners of Roland Garros are Gustavo Kuerten, Albert Costa, Juan Carlos Ferrero, Gaston Gaudio, and Rafael Nadal.  I’m going to try to explain both phenomena (specialized success and lack of repeat dominance) below.

Why does it make a difference what surface the game is played on, and what difference does it make?  Basically, the surface affects three things: the speed of the ball after it bounces, the height of the ball’s bounce, and the player’s level of traction on court.  In terms of the speed of the ball and height of its bounce, clay is the slowest and highest, and grass is the fastest and lowest, with hardcourt in the middle.  This results in differing strategies for success on each surface, with grass rewarding aggressive quick strikes – with the speed of the ball and the low bounce, you can ‘hit through’ the court and past the other player with relative ease.  For this reason, the great grass-court players have mostly been offensive players, who use serve-and-volley tactics (i.e., serving and coming to net to take the next ball out of the air).  Clay, on the other hand, reverses this in favor of the defensive player: the slow, high bounce means it is very tough to hit past an opponent, and points must be won by attrition, after long rallies in which slight positional advantages are constantly being negotiated before a killing stroke.  Clay-court tennis is exhausting, brutal work.

Clay and grass, then, are opposed, slow and fast, when it comes to the ball.  How then did Bjorn Borg, perhaps the greatest modern player (he accomplished more before his premature retirement at twenty-five than anyone other than Sampras) manage to win Roland Garros (clay) six times and Wimbledon (grass) five but never a major tournament on the medium paced surface, hardcourt?  The third variable comes into play here: traction.  Clay, and, to a lesser extent, grass, provide negative traction.  That is, you slip when you plant your foot and push off.  Hardcourt provides positive traction – your foot sticks.  Consequently, entirely different styles of quickness are needed.  Borg didn’t like positive traction.  On clay, particularly, players slide balletically into the ball, the timing for which skill is developed during childhood by the most talented players, most of whom grew up in countries where clay courts are the rule: Spain, Italy, Argentina, Chile, Brazil.  Grass is not as slidey, but offers less traction than the sticky hardcourts, and like clay, grass’ uneven natural surface produces unpredictable hops and bounces, frustrating the expectations of the more lab-conditioned hardcourt players.

So, clay slows the ball and provides poor footing, both of which qualities mean that it’s ruled by an armada of players who grow up playing on it and mastering the movement and strategic ploys it favors.  Perhaps foremost among these is the dropshot, which works because the high bounce of the clay court drives players way back and sets them up for the dropper.  This explains the dominance of the clay specialists, but why has the title changed hands among so many players lately?  For the most part, this is because of the grinding nature of clay.  So much effort must be expended to win a match (five sets on clay can take five hours of grueling back-and-forth; in contrast, bang-bang tennis on grass can be practically anaerobic) that players tire over the course of the tournament, and so much depends upon perseverance that a superhuman effort will often overcome a greater talent.  It just so happens that last year there emerged a player who combines the greatest clay talent with the greatest amount of effort, but more on him below.  For now, let me return to my claim that this edition of the French is the most important men’s tennis event this year.

Historically, the greatest offensive players (meaning players who try to dictate play and win points outright, rather than counterpunchers, who wait for their opening, or retrievers, who wait for you to mess up), have been unsuccessful at Roland Garros, while the defensive fiends who win in Paris have been unsuccessful on grass.  (Borg, a counterpunching genius, is the great exception.)  The best attackers, namely John McEnroe, Boris Becker, Stefan Edberg, and of course Pete Sampras, have won zero French Opens, while Ivan Lendl, a three-time Roland Garros winner, narrowly failed in his endearing late-career quest to win Wimbledon (all of these players won major titles on hardcourts as well).  The only man since 1970, in fact, to win all four major titles (known as the Grand Slam tournaments), on the three disparate surfaces, is one Andre Agassi, a hybrid offensive baseliner.  This has made the dream of winning all four Slams in a single year, a feat also known, confusingly, as winning the Grand Slam–last accomplished by Rod Laver in 1969–seem pretty quixotic nowadays.  Until now.  The game’s best current offensive player is also an excellent defensive player, and an extremely competent mover and slider on clay.  Roger Federer has the best chance of anyone since Agassi to win the career Grand Slam, and, as the holder of the last Wimbledon, U.S. Open, and Australian titles, could win his fourth straight major this month (a feat he is calling, with a little Swiss hubris, the “Roger Slam”).  If he succeeds this year at Roland Garros, he’ll accomplish something Sampras couldn’t, and if he does I think it’s almost inevitable that he’ll sweep London and Flushing and complete the calendar Grand Slam as well. 

Standing in the way of Federer’s c.v.-building efforts is the aforementioned combination of talent and drive, the nineteen-year-old Mallorcan prodigy Rafael Nadal.  He had one of the finest seasons I’ve ever seen last year, absolutely destroying the field on clay, winning Roland Garros, winning over Agassi in Montreal and over Ljubicic in Madrid.  He’s now won a record 54 matches on clay without a loss.  Not only does Nadal’s astonishing effort level intimidate opponents, but he is surprisingly skilled, a bulldog with the delicacy of a fox.  You can see him break opponents’ spirits over the course of matches, endlessly prolonging rallies with amazing ‘gets,’ or retrievals, which he somehow manages to flick into offensive shots rather than desperate lobs.  When behind, he plays even better until he catches up.  His rippling physique and indefatigable, undying intensity make him literally scary to face on clay.  And yet, when off the court, he is a personable and kind presence at this stage of his young life.  All in all, a player this brutal has no business being this likable, but there it, and he, is.

Nadal and Federer have played six times: Nadal has won five, and held a huge lead in the other before wilting on a hardcourt.  Let me underline here just how anomalous this state of affairs is: here we have the world number one on a historic run of victories, and yet he cannot beat number two.  Federer has lost his last three matches with Nadal; with all other players, he has lost three of his last one hundred and nineteen matches.  Rafa is the only player on whom Federer cannot impose his will; indeed, Federer must try and quickly end points against Nadal to avoid being imposed upon.  In the final at Rome two weeks ago, Federer unveiled a new strategy, coming in to net whenever the opportunity arose, though not directly following his serve.  Federer’s flexibility, his ability to adopt new tactics, made for a delicious and breathtaking final, which he led 4-1 in the fifth and final set, and held two match points at 5-4.  Here Nadal’s hypnotic retrieving unnerved him once again, and two errors led the match to a fifth-set tiebreaker.  In a microcosmic repetition, Federer again led (5-3 and serving) and again let the lead slip away.  Nadal, after a full five hours, took the title and reconfirmed his psychological edge, even over the most dominant player of the last twenty years.  His confidence will be nearly unimpeachable, where Federer’s will be shaken by losing a match in which he played the best clay-court tennis of his life.  If, as expected, they play again in the final of Roland Garros, for all the marbles, you’re going to see the most anticipated tennis match in several years.

(Note: I have gone on for way too long without handicapping the women’s field, for which I apologize.  I’ll just say here that I am hopeful that France’s glorious all-court player, Amelie Mauresmo, will win.)

See All Dispatches.

Selected Minor Works: Why We Do Not Eat Our Dead

Justin E. H. Smith

[An extensive archive of Justin Smith’s writing is now online at www.jehsmith.com]

Now that an “extreme” cookbook has hit the shelves offering, among other things, recipes for human flesh (Gastronaut, Stefan Gates, Harcourt, 257 pages; paperback, $14), perhaps our gross-out, jack-ass culture has reached the point where it is necessary to explain why these must remain untried.

I will take it for granted that we all agree murder is wrong. But this alone is no argument against anthropophagy, for people die all the time, and current practice is to let their remains simply go to waste. Why not take advantage of the protein-rich corpses of our fallen comrades or our beloved elderly relatives who have, as they say, “passed”? Surely this would not be to harm them or to violate their integrity, since the morally relevant being has already departed or (depending on your view of things) vanished, and what’s left will have its integrity stolen soon enough by flame or earth. Our dearly departed clearly have no objections to such a fate: they are dead, after all. Could we not then imagine a culture in which cannibalizing our dead were perfectly acceptable, perhaps even a way of honoring those we loved?

The fact that we do not eat our dead, in spite of their manifest indifference, has been duly noted by some participants in the animal-rights debate. They think this reveals that whatever moral reasoning goes into our decisions about what sort of creature may be eaten and what must be left alone, it simply is not, for most of us, the potential suffering of the creature that makes the moral difference. Whereas Peter Singer believes that we should stop eating animals because they are capable of suffering, others have responded that this is beside the point, since we also make humans suffer in multifarious ways. We just don’t eat them.

But again, why not? Some moral philosophers have argued that the prohibition has to do with respect for the memory of the deceased, but this can’t get to the heart of it, since there’s no obvious reason why eating a creature is disrespectful to it.

It may be that the answer is simply that, as a species, we are carrion-avoiders. After all, it is not just the vegetarian who will not eat a cow struck by lightning, but the carnivore as well. Put another way: we do not eat fallen humans, but we also do not eat fallen animals; we eat slaughtered animals. It is then perhaps not so much the fact that dead humans are (or were) human that prevents us from eating them, but the fact that they are carrion, and that we, as a species, are not scavengers.

Consider in this connection the Islamic Shariah laws that one must follow if one wishes to eat a camel that has fallen down a well (I turn here to the version of the rules as stated by the Grand Ayatollah Sistani): “[If the camel] falls down into a well and one feels that it will die there and it will not be possible to slaughter it according to Shariah, one should inflict a severe wound on any part of its body, so that it dies as a result of that wound. Then it becomes… halal to eat.”

Now, why is it considered so important to inflict a fatal wound before the camel dies as a result of its fall? Though this is but one culture’s rule, it seems to be the expression of a widespread prohibition on eating accidentally dead animals. In the case of the camel, we have an animal that is about to die from an accident, and the instruction is: if you want to eat it, you had better hurry up and kill it before it dies! This suggests that people do not slaughter simply so that a creature will be dead, but rather so that it will be dead in a certain way. Relatedly, in the southern United States, roadkill cookbooks are sold in souvenir shops as novelty items, and the novelty consists precisely in the fact that tourists are revolted and amused by the thought of the locals scavenging like vultures.

Of course, human beings do in fact eat other human beings, just not those dead of natural or accidental causes. Some decades ago, the reality of cannibalism was a matter of controversy. In his influential 1980 book, The Man-Eating Myth: Anthropology and Anthropophagy, the social anthropologist William Arens argued that stories of cannibal tribes were nothing more than racist, imperialist fantasies. Recently, though, substantial empirical evidence has been accumulated for the relative frequency of cannibalism in premodern societies. Notable among this work is Tim White’s archaeological study of anthropophagy among the Anasazi of southwestern Colorado in the twelfth century. More recently, Simon Mead and a team of researchers have made the case on the basis of genetic analysis that epidemics of prion diseases plagued prehistoric humans and were spread through cannibalistic feasting, in much the same way that BSE spreads among cattle.

In the modern era, frequent reports of cannibalism connected with both warfare and traditional medicine come from both natives and visitors in sub-Saharan Africa. Daniel Bergner reported in the New York Times that “in May [2003], two United Nations military observers stationed in northeastern Congo at an outpost near Bunia, a town not far from Beni, were killed by a local tribal militia. The peacekeepers’ bodies were split open and their hearts, livers and testicles taken – common signs of cannibalism.” One of Bergner’s informants, a Nande tribesman, recounts what happened when he was taken prisoner by soldiers from the Movement for the Liberation of Congo:

“One of his squad hacked up the body. The commander gave Kakule [the informant] his knife, told him to pare the skin from an arm, a leg. He told Kakule and his other assistant to build a fire. From their satchels, the soldiers brought cassava bread. They sat in a circle. The commander placed the dead man’s head at the center. He forced the two loggers to sit with them, to eat with them the pieces of boiled limb. The grilled liver, tongue and genitals had already been parceled out among the commander and his troops.”

Bergner notes that it is a widespread and commonly acknowledged belief in the region that eating the flesh, and especially the organs, of one’s enemy is a way to enhance one’s own power. This practice is sufficiently documented to have been accepted as fact by both the U.N. high commissioner for human rights and Amnesty International.

Cannibalism has been observed in over seventy mammal species, including chimpanzees. The hypothesis that cannibalism is common to all carnivorous species, or that this is something of which all carnivores are capable under certain circumstances, does not seem implausible. If one were to argue that these recent reports are fabrications, and that cannibalism’s modern disappearance in our own species has something to do with ethical progress, surely sufficient counterevidence could be produced from other, even better documented practices to quickly convince all concerned that no progress has been made.

The evidence suggests that, when cannibalism does happen, it is never the result of the fortuitous death of a comrade and the simple need among his survivors for protein. Rather, it follows upon the slaughtering of humans, which is exactly what we would expect, given the human preference for slaughtered pigs and cows over lightning-struck ones. Where eating animals is permitted, there is slaughter. And where slaughtering humans is permitted, the general prohibition on eating them does not necessarily hold.

In short, eating human beings is wrong because murder is wrong, and there’s no way to get edible human meat but by slaughter. I suppose Stefan Gates could look for a “donor,” who would, in case of an untimely death – a car accident, say – dedicate his body to pushing the limits of experimental gastronomy. But if the cook fails to find any willing diners, this may have much more to do with our distaste for roadkill than with respect for the memory of a fellow human.

Monday Musing: Frederica Krueger, Killing Machine

It is a warm, languorous, late-spring day here in New York, and I don’t feel like thinking about anything complicated. So, I’m just going to tell you a cat story today.

A couple of months ago, my wife Margit’s friend Bailey asked us to look after her cat (really just a kitten) while she was going to be out of town for about ten days. It was decided that the cat would just stay with us during that time. Bailey had only recently found the cat cowering in her basement, half-starved and probably living on the occasional mouse or whatever insects or other small creatures she could find. Bailey hadn’t got around to naming the cat yet, and not wishing to prematurely thrust a real name upon her, we just called her Catty while she stayed with us. We thought she must be about six months old at that time, but she was quite tiny. Catty, to put it kindly, turned out to be a more ferociously mischievous cat than I had ever seen before. She did not like to be petted, and shunned all forms of affection. This, however, should by no means lead you to infer that our interactions with Catty were limited or sparse. Not at all: we were continuously stalked and hunted by her. I may not know what it is like to be a bat, but thanks to Catty, I have a pretty good idea what it is like to be an antelope in the Serengeti! [Photo shows Catty when she first came to stay with us.]

Catty wanted to do nothing but eat and hunt. Any movement or sound would send her into a crouching tiger position, ears pinned back, tail twitching. Though she is very fast, her real weapon is stealth. (Yes, she is quite the hidden dragon, as well.) I’ll be watching TV or reading, and incredibly suddenly I am barely aware of a grayish blur flying through the air toward me from the most unexpected place, and have just enough time to instinctively close my eyes protectively before she swats me with a claw. After various attacks on Margit and me which we were completely helpless to prevent, and which left us mauled with scratches everywhere (and I had been worried about cat hair on my clothes making me look bad!), Margit took her to a vet to have her very sharp nails trimmed (we did not have her declawed, which seemed too cruel and irreversible). The vet asked Margit for a name to register her under, and Catty immediately tried to kill him for his impertinence. While he bandaged his injuries, Margit decided to officially name the little slasher Frederica Krueger, thereby openly acknowledging and perhaps even honoring her ineluctably murderous nature. We started calling her Freddy.

Here’s the funny thing: despite her fiercely feral, violent tendencies, Freddy was just so beautiful that I fell in love with her. To echo Nabokov’s Humbert Humbert speaking about another famous pubescent nymphet: Ladies and Gentlemen of the Jury, it was she who seduced me! As Freddy got more used to us, it was as if she could not decide whether to try and eat us, or be nice. She started oscillating between the two modes, attacking and then affectionately licking my hand, then attacking again… But it was precisely the graceful, lean, single-minded perfection of her design as a killing machine that I could not resist. Like a Ferrari (only much more impressive), she was clearly built for one thing only, and therein lay her seductive power. (Okay, I admit it, I’ve always liked cats. The photo here shows me sitting on a chimney on the roof of our house in Islamabad in the late 60s with my cat Lord Jim.)

We mostly read whatever psychological intentions we want (and can) into our pets, imputing all sorts of beliefs and desires from our own psychological economies to them, and this works particularly well to the advantage of cats. They are just intelligent enough to get our attention as intentional agents (unlike, say, a goldfish, or even a hamster, which seem barely more than the automatons Descartes imagined all animals except humans to be), but the fact that they are very mentally rigid and cannot learn too much makes them seem imperious, haughty, independent, and noble to us, unlike dogs, who are much more flexible in intelligence and can learn to obey commands and do many tricks to please us. Let’s be blunt: cats are quite stupid. But to be fair, maybe much of the nobility we read into some humans is also the result of their rigidity. Who knows. In any case, cats are such monomaniacally hardwired hunters that it is impossible not to admire their relentless pursuit of prey, even if (in my case!) that prey is us. Since, like many gods of the ancients, cats are mostly oblivious to human wishes and impossible to control, it is no surprise that some ancient peoples held them to be gods.

In ancient Egypt cats were considered deities as early as 3000 BCE, and later there existed the cult of the goddess Bast, who was originally depicted as a woman with the head of a lioness but soon changed to an unmistakably domestic cat. Since cats were considered sacred, they were also mummified. Herodotus reports that when an Egyptian cat died, the members of the household that owned it would shave their eyebrows in mourning. Killing a cat, even accidentally, was a capital crime. The cult of Bast was not officially banned until 390 CE, and reverence for cats ran deep: another Greek historian, Diodorus Siculus, relates an incident from about 60 BCE where the wheels of a Roman chariot accidentally crushed an Egyptian cat. An outraged mob immediately killed the soldier driving the chariot.

The domestic cat was named Felis catus by Linnaeus and, like dogs, belongs to the order Carnivora. Not all carnivores are in this order (even some spiders are carnivores, after all), and not all members of the Carnivora are carnivores (the panda, for example). Other members of this order are bears, weasels, hyenas, seals, walruses, etc. Like our own, the ancestors of the modern domestic cat came from East Africa. Cats were probably initially allowed or encouraged to live near human settlements because they are great for pest control, especially in agricultural settings with grain storage, etc. This arrangement also afforded cats protection from larger predators, who stayed away from humans for the most part. Even now, cats will hunt more than a thousand species of small animals. Domestic cats, if left in the wild, will form colonies, and by the way, a group of cats is known as a clowder. (Be sure to throw that into your next cocktail party conversation.)

It took even physicists a while to figure out how a cat always lands on its feet, which is known as its “righting reflex.” The problem is that in mid-air, there is nothing to push off against to change your orientation (imagine being suspended in space outside a rocket, and trying to rotate). So how do they do it? The answer is actually quite technical and has to do with something called a phase shift. (Like a spinning figure skater being able to speed up or slow down her rate of rotation by drawing her arms in or holding them out.) What the cat does is first put its arms out and rotate the front half of its body in one direction and the back half in the opposite direction (a twisting motion), then it draws its arms in and twists in the opposite direction. But because angular momentum must be conserved, and angular momentum depends on the radial distance of mass from its axis of rotation, it will rotate back less this time, thereby achieving a net rotation in the direction of the first twist. If you don’t get it, don’t worry about it!
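For the curious, here is a minimal numerical sketch of the standard two-cylinder toy model behind that explanation: treat the falling cat as a front half and a back half joined at the waist, keep the total angular momentum about the long axis at zero, and let the halves change their moments of inertia between the two twists (legs stretched out versus tucked in). The inertia values and the 180-degree spine twist below are invented purely for illustration, and the model swaps the hind legs’ configuration as well as the forelegs’, so it is a simplified cousin of the maneuver described above rather than a biomechanical account.

```python
# Two-cylinder toy model of the righting reflex (illustrative numbers only).
# Total angular momentum about the body's long axis stays exactly zero,
# yet the body ends the cycle with a net rotation.

def split_twist(delta_psi, i_front, i_back):
    """Split a relative spine twist delta_psi between the two halves so that
    i_front * theta_front + i_back * theta_back = 0 (zero angular momentum)."""
    theta_front = delta_psi * i_back / (i_front + i_back)
    theta_back = -delta_psi * i_front / (i_front + i_back)
    return theta_front, theta_back

TWIST = 180.0  # degrees of relative twist between the two halves

# Phase 1: front legs stretched out (front half has the larger inertia), twist one way.
f1, b1 = split_twist(+TWIST, i_front=3.0, i_back=1.0)
# Phase 2: front legs tucked in, hind legs stretched out, untwist the spine.
f2, b2 = split_twist(-TWIST, i_front=1.0, i_back=3.0)

print(f"front half net rotation: {f1 + f2:+.1f} degrees")
print(f"back  half net rotation: {b1 + b2:+.1f} degrees")
# Both halves come out rotated by -90 degrees, the spine is untwisted again,
# and angular momentum was zero throughout: two such cycles flip the cat over.
```

The only point of the sketch is that changing the moments of inertia between the twist and the untwist leaves a net rotation behind, which is the “phase shift” the paragraph above gestures at.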

Cats appear frequently in fiction and writers seem to have a particular predilection for them. Ernest Hemingway and Mark Twain were serial cat-owners. Hemingway at various times had cats named Alley Cat, Boise, Crazy Christian, Dillinger, Ecstasy, F. Puss, Fats, Friendless Brother, Furhouse, Pilar, Skunk, Thruster, Whitehead, and Willy. Twain’s cats were Appolinaris, Beelzebub, Blatherskite, Buffalo Bill, Satan, Sin, Sour Mash, Tammany, and Zoroaster. Meanwhile, Theodore Roosevelt’s cat Tom Quartz was named for a cat in Mark Twain’s Roughing It. T.S. Eliot owned cats named Tantomile, Noilly Prat, Wiscus, Pettipaws, and George Pushdragon. William and Williamina both belonged to Charles Dickens.

Lord Byron and Jorge Luis Borges both had cats named Beppo. (Byron travelled accompanied by five cats.) Edgar Allan Poe had Catarina; Raymond Chandler, Taki. Kingsley Amis’s cat was Sara Snow. Some cats were, of course, named for famous people as well as owned by them, such as Gloria Steinem’s Magritte and Anatole France’s Pascal. John Lennon was the proud owner of Elvis. John Kenneth Galbraith was forced to change his cat’s name from Ahmedabad to Gujarat after he became the U.S. ambassador to India because Muslims were offended by “Ahmed” (one of Mohammad’s names) being associated with a cat. Mohammad himself, according to a report (hadith) attributed to Abu Huraira, owned a cat named Muezza, about whom it is said that one day while she was asleep on the sleeve of Mohammad’s robe, the call to prayer was sounded. Rather than awaken the cat, Mohammad quietly cut his sleeve off and left. When he returned, the cat bowed to him and thanked him, after which she was guaranteed a place in heaven.

Isaac Newton not only loved cats, but is also said (probably apocryphally) to be the inventor of the “cat flap,” allowing his cats to come and go as they pleased. (Wonder how long a break he had to take from inventing, say, calculus, to do that.) And by the way, among famous cat haters can be counted such luminaries as Genghis Khan, Alexander the Great, Julius Caesar, Napoleon Bonaparte, Benito Mussolini, and last but not least, Adolf Hitler. What is it about cat-hating that basically turns one into a Dr. Evil? But wait, Dr. Evil likes cats!

Okay, enough random blather. Back to Ms. Frederica Krueger’s story: as the moment of Bailey’s return from her trip and the time for Freddy to leave us approached, I grew more and more agitated, finally threatening Margit that I would kidnap the cat and run away with her unless she did something to stop Bailey from coming to pick up her cat. At first Margit tried to tell me that we could get another cat, which only made me regress further and throw a tantrum yelling, “I don’t want another cat! I only want this cat!” At this point, Margit told me I had finally cracked up completely and advised me to call a shrink. Bailey was coming to get the cat early the next morning. I went to bed late, as I often do, and was still asleep when Margit awakened me to say that Bailey had agreed to let us have the cat, as she seemed very happy here, and Bailey’s apartment was really too small anyway. Thus Frederica became ours, and we remain her willing and ever-anxious prey.

Freddy’s Photo Gallery

Here are some glamour and action shots of Ms. Frederica Krueger, which you can click to enlarge. Captions are below the photos:



TOP ROW:

  1. I catch Freddy suddenly pouncing on an unsuspecting Margit’s hand from behind our living room sofa (a favorite place of hers from which to launch her demonic attacks). Her eyes reflect the light from the camera flash because of a mirror-like layer behind her retinas called the tapetum. Nocturnal animals have this reflective surface there to bounce photons back toward the photosensitive cells of the retina, thereby almost doubling the chance that they will be registered (a quick numerical sketch of that “almost” follows these captions), and greatly improving the animal’s night vision. The daytime vision of cats is not as good as that of humans, however.
  2. She is striking a deceptively demure pose. Don’t let it fool you. I have paid dearly for that mistake. In blood.
  3. Freddy loves this incredibly silly toy, which is basically just a little felt mouse that goes around and around, driven by a battery-powered motor. She spends inordinate amounts of time and energy trying to slay this patently fake rodent.

BOTTOM ROW:

  1. Freddy has a habit of sitting on various bookshelves in the apartment, usually at a greater height than in this picture, surveying the scene below, much like a vulture.
  2. Margit too bravely holds Freddy in her lap; the cat is only milliseconds away from trying to shred Margit’s hands with the claws of her powerful rear legs.
  3. If you didn’t believe me when I said that often all I see is a grayish blur flying at me, have a good look at this picture (enlarge it by clicking on it) taken at 1/8th of a second shutter speed. Freddy is jumping from a lower bookshelf to the shelf above the stereo on the right, so she can climb to even higher shelves along that wall.
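About that “almost” doubling: if each pass through the photoreceptor layer catches a photon with some probability p, a perfectly reflective tapetum gives the photon one more pass, so the capture probability rises to 1 - (1 - p)², which is just short of 2p. The sketch below only illustrates that arithmetic; the probabilities are made up for illustration, not measured feline optics.

```python
# Back-of-the-envelope check of the "almost doubling" claim for the tapetum.
# Assumption: each pass through the retina registers a photon with probability p,
# and a perfectly reflective tapetum grants exactly one extra pass.

def capture_probability(p, with_tapetum=True):
    """Chance that a single photon is registered."""
    if not with_tapetum:
        return p
    return 1 - (1 - p) ** 2  # one minus the chance of missing on both passes

for p in (0.1, 0.3, 0.5):
    one_pass = capture_probability(p, with_tapetum=False)
    two_pass = capture_probability(p, with_tapetum=True)
    print(f"p = {p:.1f}: one pass {one_pass:.2f}, with tapetum {two_pass:.2f} "
          f"(gain x{two_pass / one_pass:.2f})")
```

The gain comes out close to a factor of two only when p is small, which is why “almost doubling” is the right hedge.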

Have a good week!  My other Monday Musing columns can be seen here.

Monday, May 22, 2006

Monday Musing: Modern Myths

I’ve spent the last two months binge-watching nearly every season of Buffy the Vampire Slayer. Season 6 sadly got sent to a different address, forcing me to wait and watch the series out of order. I’ve seen every episode at least a few times, so no surprise is ruined. (Since I bought my DVD sets, I’ve watched a few episodes a few more times.) There is a certain satisfaction to watching the series out of sync, something akin to looking through a photo album and remembering your life out of order.

Watching these episodes, I find myself more caught up in the world of BtVS. I certainly need more than the 7 seasons. I’ve found myself reading through the whedonwiki (after Joss Whedon, the creator of BtVS and the series Angel), hyperlinked episode guides, but mostly a lot of fan fiction.

Fan fiction as a genre is fairly well examined, although there are plenty of debates about what counts as fan fiction. Satire or works such as Wide Sargasso Sea don’t really seem to cut it. The earliest clear instance of fan fiction may be the Sherlock Holmes-related stories. Apparently after Conan Doyle killed Holmes off in 1893, fans of the detective wrote tales of the Baker Street Irregulars, the street urchins that Holmes and Watson would turn to for information. The genre is at least a century old, then. Contemporary fan fiction seems to have really taken off with Star Trek.

The effect of a work of fan fiction is simple. The fan of a television show, comic book, movie, etc. becomes a producer of the stories set in these worlds and not merely a consumer of them. That’s pretty straightforward. Another effect is that the storylines spin out of control, series become inconsistent, and characters’ personalities follow arcs that seem at odds with those of the originals.

Fan fiction is not the only genre to suffer from inconsistencies. The other genre with similar problems, and perhaps virtues, is comics. Apart from a few foundational moments, it’s impossible to tell the history of Batman, for example. Part of this stems from the fact that many stories are written by many writers over the decades since the Batman character first appeared.

Verification becomes a problem in these universes. Since these are works of fiction, we can only look to the texts themselves, and inconsistencies become contradictions when we try to make sense of what happened in a story universe. Movies, cartoons, and video games only compound the problem.

In the case of comic books, there are attempts every so often to re-write the history of the hero’s universe. The fact of the contradictions is faced head-on, but with a multi-universe caveat, and some authoritative “smoothing” is carried out. The results seem more confusing than the problem. With fan fiction, the studio or the author declares a canon, with everything outside it being non-canonical. Of course, the “canon” is not a legal category, and ultimately it’s left to the community of readers to “decide,” as it were.

Fan fiction and comic books point to two opposing tendencies, one associated with antiquity and the other with modern narratives. At least that was my impression when I started plowing through some of the Buffy fan fiction and was hit with the lists of story synopses that were incompatible with each other. I got that sense largely because of a passage from The Marriage of Cadmus and Harmony, which I’d been reading recently.

Mythical figures live many lives, die many deaths, and in this they differ from the characters we find in novels, who can never go beyond the single gesture. But in each of these lives and deaths all the others are present, and we can hear their echo. Only when we become aware of a sudden consistency between incompatibles can we say we have crossed the threshold of myth. Abandoned in Naxos, Ariadne was shot dead by Artemis’s arrow; Dionysus ordered the killing and stood watching, motionless. Or: Ariadne hung herself in Naxos, after being left by Theseus. Or: pregnant by Theseus and shipwrecked in Cyprus, she died in childbirth. Or: Dionysus came to Ariadne in Naxos, together with his band of followers; they celebrated a divine marriage, after which she rose into the sky, where we still see her today amid the northern constellations. Or: Dionysus came to Ariadne in Naxos, after which she followed him around on his adventures, sharing his bed and fighting with his soldiers; when Dionysus attacked Perseus in the country near Argos, Ariadne went with him, armed to fight amid the ranks of the crazed Bacchants, until Perseus shook the face of Medusa in front of her and Ariadne was turned to stone. And there she stayed, a stone in a field.

Only when we become aware of a sudden consistency between incompatibles can we say we have crossed the threshold of myth. And if there’s a sign that Holmes, Batman, Kirk, Picard, Dax, Faith, Spike, and the others have all crossed the threshold of myth, it may be just this, in the structure of their narratives: the feeling of consistency between incompatibles that you find reading fan fiction.

Monday, May 15, 2006

Lunar Refractions: in it for the Long Run

Last week I had the good fortune to see the Hokusai exhibit at the Sackler Gallery in Washington, DC. Hokusai, who died 157 years ago, lived to be eighty-nine (or ninety, depending on your calendar). The show addressed his entire time on earth, from 1760 to 1849, and the work spanned from just after his apprenticeship to his terrestrial end. I cannot say much here about the exhibit, because the work just needs to be seen, but within it were embedded a lot of very timely ideas.

By any Other Name it’s not the Same

“With each major shift in direction of his life and art, Hokusai changed his artistic name….” – introductory panel in the Sackler Gallery exhibit

Some of Hokusai’s names:
1779–1794 Shunro (age nineteen to thirty-four)
1795–1798 Sori (age thirty-five to thirty-eight)
1798–1809 Hokusai, “North [star] studio” (age thirty-eight to forty-nine)
1810–1819 Taito (age fifty to fifty-nine)
1820–1833 Iitsu, “one again,” referring to an auspicious sixty-year cycle (age sixty to seventy-three)
1834–1849 Manji, “10,000” or “eternity” (age seventy-four to eighty-nine or ninety)

This is an approach I think Madonna would agree with (her latest album is fantastic, in that it sounds precisely like her, in that she never sounds the same), even if her particular name is too emblematic to be easily replaced. Every artist, of every sort, works in phases, and marking them—even honoring them—with a special name seems to make perfect sense. This is frequently done for political, whimsical, or other reasons, but usually one name is replaced with one other; rarely does anyone attempt the incessant name-shifting that Hokusai did.

This relates to the idea of taking a pseudonym, several times over. Amantine Lucile Aurore Dupin became George Sand. Marie Henri Beyle became Stendhal. Samuel Langhorne Clemens became Mark Twain. Herbert Ernst Karl Frahm became Willy Brandt. Marion Morrison became John Wayne. Charles Édouard Jeanneret became Le Corbusier. Kurt Erich Suckert became Curzio Malaparte. Charles Lutwidge Dodgson became Lewis Carroll. Benjamin Franklin became (on occasion, and delightfully) Silence Dogood. William Michael Albert Broad became Billy Idol. Stephen Demetre Georgiou became Cat Stevens became Yusuf Islam. Norma Jean Mortensen became Norma Jean Baker became Marilyn Monroe. And who are you?

But I don’t mean to get too sidetracked; Hokusai’s names were often adopted for their significance. I sure hope to see myself as one again if I turn sixty, and at seventy-four I wouldn’t mind if people were to invoke eternity when calling me. What the exhibition didn’t make clear to me was whether people followed Hokusai’s works as his despite the changing names; he’d become quite famous by the name of Hokusai in his late thirties, and I’m unclear as to whether his fans bought works by Taito, Iitsu, and Manji knowing that they were his or not. History has a way of distorting these things. Our own contemporary J. T. Leroy, or whoever, rose to fame by the age of twenty or so, only to have everyone who had previously fawned over him/her/it lose track of the writings amid the identity debate. I enjoyed watching the whole thing, as it proved how important the identity behind a work is to contemporary audiences, to the point of dismissing the work itself if the identity comes into question. Perhaps Leroy wouldn’t have had such trouble if people were more focused on the writing from the start, as opposed to marveling at questions of age, sex, and other eminently consumable trivia.

Moving on, Painting on

In his postscript to One Hundred Views of Mount Fuji, Hokusai gives us a brief sketch of his view of life: “From the time I was six, I was in the habit of sketching things I saw around me, and around the age of fifty, I began to work in earnest, producing numerous designs. It was not until after my seventieth year, however, that I produced anything of significance. At the age of seventy-three, I began to grasp the underlying structure of birds and animals, insects and fish, and the way trees and plants grow. Thus, if I keep up my efforts, I will have even a better understanding when I am eighty, and by ninety will have penetrated to the heart of things. At one hundred, I may reach a level of divine understanding, and if I live a decade beyond that, everything I paint—every dot and line—will be alive. I ask the god of longevity to grant me a life long enough to prove this true.” [translation by Carol Morland]

What I find remarkable here is that he skips straight from the age of six to fifty. There would be little space for him in today’s art world. But he went ahead anyway. In his incessant work he conversed with any- and everything around him: people, animals, rocks, poems, seasons, trades. Amid the dozens of mass-market illustrated books (manga) he published were titles such as Various Moral Teachings for all Time (at age twenty-four) and Women’s Precepts (at age sixty-eight). All that before he produced anything of significance.

Thinking of all the things Hokusai conversed with in his work, it occurred to me that I care about things born before me because they provide such good conversation. I have great difficulty, not to mention a sense of futility, starting anything of my own without checking previous references and precedents—such context provides meaning. If I do entertain the delusion of working outside of all previously trodden paths, I inevitably (and thankfully) come across something that has already achieved (many years ago, and better) what I had in mind. I was discussing this with a neighbor of mine who paints, critiques, and writes about art, and it all came down to meaning and conversation, in all senses, across medium and time.

Which brings me to one of my favorite pieces ever, a table top made between 400 and 600 in Byzantium, now at the Metropolitan Museum of Art. It was probably used to celebrate feasts held at the grave in honor of the dead. Why do I bring this up? Because for me it is an object that visually embodies the very place of conversation—where people gather, meet, often eat or drink, and listen and talk. Next door to this are a bunch of terracotta pieces I’d always grouped with the famous red- and black-figure vessels, but had preferred over the others solely for their white ground. Strolling by them last year with a friend from Greece, I mentioned my favorites, and she replied, “oh, of course, the funerary lekythos.” The “of course” threw me off, since my ignorance had placed them on the wine- and water-bearing Dionysian level of all the others, but I was quickly told they held oil, and were always found in tombs. Looking closer, they all feature scenes of parting or visitation between mourners and the dead. No wonder I found their serene beauty enchanting.

Ancient Greek culture popped up again last weekend, in the most unexpected place. I was at Doug Aitken’s Broken Screen Happening at 80 Essex Street, sponsored by Hermès and Creativetime, where I was somehow admitted despite not being nearly cool enough, judging from the crowd. The highlight of the evening was when musician Adam Green thanked Hermes, aptly pronouncing it like the Greek god of boundaries and travelers who cross them, as well as orators, literature, poets, commerce, and a bunch of other things—as opposed to the French god of handbags. So many people were talking over the performer that it was difficult to hear. Hermes also acts as translator and messenger between the gods and humans. All of it was just too perfect. Though Hokusai wasn’t granted all the time he wished for, he certainly made the most of what he was given, and I’m sure he and Hermes are having a grand old time giving us hints about ideas we think are our own.

The Hokusai exhibit closed yesterday. All things come to an end eventually.

[In memory of STR and JMD].

Monday, May 8, 2006

Below the Fold: Inequality in a Predatory World

We live in a predatory world. The poor, the helpless, or simply the less well off find themselves dehumanized and victimized around the world. They are often defenseless against the degradation and violence visited upon them by the better off, or by the states the better off control. Without economic equality, human well-being, a life rich in the possibilities of self-fulfillment, is impossible. Without economic equality, any gains in achieving full citizenship, including racial, gender, and political equality, are unsustainable.

Indeed, quite the opposite occurs routinely. Disadvantage awakens in the advantaged a desire for gain at the expense of others, even a desire for conquest over others less powerful. The English philosopher Thomas Hobbes argued that when people found themselves in a state of equality, their gnawing fear of losing their status would transform their society into a war of all against all. The world’s rich are showing that Hobbes, if anything, underestimated the power of circumstances. Even overwhelming economic superiority does not quiet the fear of losing. As the saying goes, you can never be too rich, but for reasons the society wags never fathomed. Those who have it all never have enough. Instead, having it all quickens their desire for more. It also arouses in them a need to dominate and degrade the disadvantaged masses beneath them. They enact their sovereignty by violating the dispossessed. The rich become what Hobbes believed the sovereign must become — a monstrous Leviathan capable of instilling shock, awe, and death, this time among the world’s poor.

Perhaps only the cynical Manicheans trying to run the world from the White House understand this need for the Leviathan. The endless desire for more wealth, the fear of the poor from whom the wealth is extracted, and the need to make the masses stand in fear suggest a reason for our period’s particular cruelty. Endless wars, mass annihilations, horrific tortures, barbaric incarcerations, and above all a policy of lawlessness are Leviathan’s means. Its works produce grisly as well as material satisfactions for the rich and a ghastly theater of violence and subjection for the rest.

I argue that the more economically unequal the world becomes, the more of an inferno our lives will become. Liberal intellectuals and policy makers, or perhaps one should say the rest of the world’s ruling elite, seem inured to the relationship between growing inequality and growing inhumanity. Instead of demanding economic equality, they focus on poverty reduction, hoping that reducing poverty will make a dent in economic inequality. Perhaps cynically, too, they hope that modest improvements in living standards will dampen popular resistance to the rule of the rich, to which they, though less than the rich themselves, are acclimated.

“Let us abandon the fight against inequality,” writes Foreign Policy editor Moises Naim in a recent Financial Times op-ed. “Let us stop fighting a battle we cannot win and concentrate all efforts on a fight that can succeed. The best tools to achieve a long-term, sustained decline in inequality are the same as those that are now widely accepted as the best available levers to lift people out of poverty.” By fighting poverty through health, education, jobs and housing, Naim argues, we will wear inequality down.

Naim expresses, albeit from the liberal side, the consensus view of the rich-country development community, the World Bank, and an international effort such as the UN Millennium Project. Poverty reduction is the goal because it is achievable, and it is saleable as a strategy precisely because poverty reduction does not call for a redistribution of world resources. Thus, liberals, either naïve or too mindful of the Leviathan, content themselves with lifting up the abject. They either do not countenance or reject outright liberating the dispossessed from subjection.

The trouble with the liberal position, though it is very different from Manichean murder and terror, is that it is rather wishful and ignores well-established facts. Eliminating poverty does not achieve equality, and it doesn’t take a Nobel-winning economist to show it. The United States hit its lowest historical level of economic inequality in 1968, a time of great prosperity and government intervention to eliminate poverty. The level we reached then was equivalent to the economic inequality we would find in many poor countries today, which is to say a pretty abysmal level. Note too that the good times of the Clinton era and the recent recovery during the Bush regime have not stopped economic inequality from growing. In fact, inequality in America has been accelerating, not slowing.

Economic growth alone does not eliminate poverty. Many economists forecast that it will take China, even at its remarkable rate of economic growth, almost 30 years to eliminate dire poverty, leaving the massive job of lifting up to another half a billion people out of three-to-four-dollar-a-day poverty. Perhaps cognizant of this, the Chinese state is taking dramatic steps to redistribute income to the rural peasantry, eliminating land taxes, providing free public education, and rebuilding a rural health system. Yet even as Chinese poverty proves a difficult problem to solve, a middle class will be living at the level of today’s Korean middle class, and the great wave of capitalist development will have created a massive new generation of the truly, world-level wealthy. Inequality will get worse, and one can only wish the Chinese peasants good luck.

The first lesson here is that economic growth creates the wealthy first, and brings along the masses later – far later than the time necessary to earn their way to equality through labor or enterprise. It happens inside countries like our own. It happens across countries. Consider evidence accumulated by World Bank economist Branko Milanovic that the ratio of inequality, rich country to poor country, has grown from 19 to 1 in 1960 to 37 to 1 in 2000. This is true despite the spread of industrialization, thought to be the holy grail of development, and rising income levels in Asia.
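Just to put those figures in perspective (the arithmetic below is mine, not Milanovic’s, and uses only the two numbers quoted above), a ratio that climbs from 19 to 1 to 37 to 1 over forty years corresponds to a divergence of a little under two percent per year, compounded:

    # Rough arithmetic on the figures quoted above (my calculation, not Milanovic's):
    # how fast did the rich-to-poor country ratio grow per year, on average?
    start_ratio, end_ratio, years = 19.0, 37.0, 2000 - 1960

    annual_growth = (end_ratio / start_ratio) ** (1 / years) - 1
    print(f"The ratio grew by about {annual_growth:.1%} per year, compounded, over {years} years.")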

The second lesson is that if you don’t go after economic equality, and settle instead for poverty reduction, there is little prospect that the disadvantaged can hold on to their gains, given the predations of the rich. Again, the US is a paradigm case. Even as the rich have gotten richer over the past quarter century, the American state has actually contrived to take back a variety of welfare benefits from the poor. As America’s median family income has stagnated since the seventies, the poor have become objectively poorer. The state has ignored these facts and refused increases in life support consisting of income supplements, housing assistance, health care, education, and food assistance.

The only solution that will work, whether at the national or the international level, is redistribution of the wealth. The rich must be made poorer and the poor made their equals, if the goal is a modicum of well-being for all.

We know how to do this at the national level, and again the evidence for its success is widely known. Taxes work. Not only did they increase equality in America starting with World War I and beginning again during the New Deal, but inequality increased as taxation radically declined starting with the Reagan Administration in 1981.

At the international level, how to proceed is less certain, given that no international body possesses the means to compel peoples, via their states, to contribute tax monies to the common good of all. The amounts necessary to raise are not hard to calculate. We are masters of calculation in this age. Currently, rich countries cannot even come up with 1% of their Gross Domestic Product in transfer payments to poor countries, a figure once considered the minimum moral response to global destitution. Despite six years of posturing about supporting the UN Millennium initiative to eliminate much of the world’s less-than-a-dollar-a-day poverty, rich-country support is declining rather than increasing. It is important to put redistribution at the top of the global agenda rather than engage in the bait and switch of poverty reduction.

Economic equality requires an obviously enormous and lasting redistribution of wealth worldwide. Yet someone once calculated that there is US$5000 in wealth for every person on the planet, the equivalent of the per capita Gross Domestic Product of Uruguay. Imagine the world as a big Uruguay. Things could be worse: people in Uruguay live as long as Americans do, their child mortality rate is even with ours, and less than 4% of their children suffer malnutrition.

The beaches are beautiful, Montevideo is a dream, and no one expects an Uruguayan invasion of Iran any time soon.

Teaser Appetizer: The Adipose American, A Few Facts

Evolutionary pressures banish unfit biological species into extinction. The American descendants of Homo sapiens will explode into extinction at the midriff. A walk through Main Street, USA, will convince any skeptic of the veracity of this prediction. And it will all happen due to the adipose state of the nation.

Fact: 65% of the US population is either overweight or obese.
Fact: The proportion of obese Americans zoomed from 14.5% in 1976 to 30.5% in 2000.

Millions of Americans are obese, diabetic, hypertensive, and hyperlipidemic, and they succumb to this murderous metabolic syndrome. Strokes, heart attacks, fatty liver, osteoporosis, cancer, depression, arthritis, and sleep apnea ravage the obese. The chart below, reproduced from Baylor College of Medicine, depicts the havoc unleashed by obesity:

[Chart: the complications of obesity, reproduced from Baylor College of Medicine]

We have an epidemic. We spend $117 billion directly or indirectly on obesity and its complications; we eat more, exercise less and our bodies have become a battleground of conflicting hormones and peptides.

We thought our loads of fat were meant only for aesthetic shame, but in 1994 scientists told us that adipose tissue is an endocrine organ! Yes, an endocrine organ, similar to the thyroid and adrenal glands. Like them, it secretes into the blood a hormone — in this case, leptin — which travels to the remotely located hypothalamus and suppresses appetite.

Conversely, lack of leptin stimulates appetite and encourages overeating, thus increasing fat storage. (This probably conferred an evolutionary advantage, helping to store a reservoir of fat for lean days of starvation.) Mice made leptin-deficient by gene deletion (ob/ob mice) are obese, and leptin replacement cures their obesity. Leptin-gene deficiency is rare in humans, and the obesity it causes improves with leptin therapy.

Corollary: if leptin were administered to obese people, they should lose weight. So the investigators tried it, but with only partial success. It so happens that obese people have high – not low – levels of leptin. Their cells lack the receptors for leptin to attach to, and so they are resistant to leptin therapy. Thus, the obese are either leptin deficient or leptin-receptor deficient.

Leptin is not the only attention grabber; ghrelin entered the stage in 1999. The stomach secretes ghrelin in response to hunger; a hungry man has high ghrelin. Circulating ghrelin acts on the hypothalamus to stimulate appetite; once a meal is eaten, ghrelin secretion stops, and a satiated man has low ghrelin. Now add to this complexity insulin, cholecystokinin, and GLP-1. Low insulin levels stimulate hunger and initiate the act of eating. The fat, and probably the protein, in the meal stimulates cholecystokinin secretion from the upper small bowel, which suppresses appetite and slows gastric emptying, causing fullness and satiation. The act of eating stops. GLP-1 oozes out of the lower small bowel to suppress the appetite further.
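To make the shape of that feedback loop concrete, here is a toy sketch in Python. Nothing in it comes from the article: the numbers, the time scale, and the little step function are all invented, and it is meant only to illustrate the structure just described (ghrelin rising with fasting and falling after a meal, while cholecystokinin and GLP-1 rise with a meal and push appetite back down), not to model the physiology.

    # Toy illustration of the feedback structure described above.
    # Arbitrary units and made-up dynamics -- not a physiological model.

    def step(ghrelin, cck, glp1, ate):
        """Advance the toy hormone state by one time step (one hour)."""
        if ate:
            ghrelin = 0.2            # a meal shuts ghrelin secretion down
            cck, glp1 = 1.0, 1.0     # fat and protein in the meal release CCK and GLP-1
        else:
            ghrelin = min(1.0, ghrelin + 0.1)   # fasting lets ghrelin creep back up
            cck *= 0.6                          # satiety signals decay between meals
            glp1 *= 0.7
        # appetite rises with ghrelin and is suppressed by the satiety hormones
        appetite = max(0.0, ghrelin - 0.5 * (cck + glp1))
        return ghrelin, cck, glp1, appetite

    ghrelin, cck, glp1 = 1.0, 0.0, 0.0
    for hour in range(8):
        ate = (hour == 3)   # a single meal, at hour 3
        ghrelin, cck, glp1, appetite = step(ghrelin, cck, glp1, ate)
        marker = "  <- meal" if ate else ""
        print(f"hour {hour}: ghrelin={ghrelin:.2f} appetite={appetite:.2f}{marker}")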

But that is not all; in science it gets complex before it gets simple. See the diagram below, reproduced from Baylor College of Medicine:

[Diagram: short- and long-term signals regulating appetite and fat storage, reproduced from Baylor College of Medicine]

Adiposity is regulated by a set of short- and long-term signals. The short-term signals determine the size and frequency of a meal; the long-term signals determine fat storage.

The mechanism of appetite regulation and fat deposition is an interaction of competing and feedback signals. The following are some of the mediators:

  1. Neurotransmitters in the hypothalamus, like NPY, AgRP, and 5-HT
  2. Hormones from the gut and fat tissue, like leptin, ghrelin, cholecystokinin, and GLP-1
  3. Other circulating hormones, like cortisol and thyroxine
  4. Sensory input from the stomach and intestines
  5. External input, like smell, taste, and emotions

Currently the US obesity hormones are in a state of misalignment: the USA is a leptin-resistant, ghrelin-deficient, cholecystokinin-inefficient, and insulin-abundant nation.

And we still don’t know which molecule is the master conductor of this orchestra, or how to transform this cacophony into harmony. The mechanism of appetite regulation and fat deposition is obviously complex, which leads to the general failure of any single mode of therapy. Unrealistic individual weight-reduction goals further thwart success. The therapy of obesity must include a combination of the following:

  1. Eat less: A daily deficit of 500 to 1000 calories is reasonable. This is the single most important component of therapy and the most difficult to adhere to (a rough arithmetic sketch of what such a deficit adds up to follows this list).
  2. Exercise a lot: Strenuous aerobic activity for over 200 minutes per week, maintained over a long period of time along with calorie restriction, is effective. Physical activity conserves fat-free mass and improves glucose tolerance and the lipid profile. Fact: Moderate exercise, like walking 45 minutes a day, 5 days a week, has minimal effect on weight loss.
  3. Modify behavior to avoid the temptation to engorge on food. This warrants lifestyle change and altering one’s emotional response to food. Self-monitoring and social support are essential.
  4. Use drug therapy: Only two drugs have been approved by the FDA for long-term therapy.
    • Sibutramine causes anorexia by blocking neuronal monoamine reuptake.
    • Orlistat decreases fat absorption.
  5. Get surgery if morbidly obese and nothing else helps.
    • Gastric bypass, to channel food directly into the mid-intestine, thus decreasing absorption
    • Gastric banding and stapling, to diminish the size of the stomach
    • A combination of bypass and stomach-size reduction
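As a rough illustration of what the deficit in item 1 adds up to (this back-of-the-envelope sketch is mine, not the column’s, and leans on the common, admittedly simplified, rule of thumb that a pound of body fat stores roughly 3,500 calories):

    # Back-of-the-envelope arithmetic for the 500-1000 calorie deficit in item 1.
    # Assumes the common rule of thumb of ~3500 kcal per pound of body fat,
    # which is itself a simplification.
    KCAL_PER_POUND_FAT = 3500

    def weekly_loss_pounds(daily_deficit_kcal):
        """Approximate weekly fat loss for a constant daily calorie deficit."""
        return daily_deficit_kcal * 7 / KCAL_PER_POUND_FAT

    for deficit in (500, 750, 1000):
        print(f"{deficit} kcal/day deficit -> about {weekly_loss_pounds(deficit):.1f} lb/week")

In other words, the prescribed deficit works out to something like one to two pounds a week, which is why meaningful weight loss is measured in months rather than days.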

Fact: Even a moderate weight loss of 5% decreases the complications significantly.

The prescription of eat-less-exercise-more-modify-behavior is still the best choice, but compliance has been pathetic. On average, a person on a weight-reduction diet has already tried and failed at three to six other diets. This failure has created an enormous market opportunity for fad-diet authors and manufacturers. Some examples:

  1. Eat fewer carbohydrates (Atkins, South Beach)
  2. Eat less fat (Ornish, Pritikin)
  3. Eat less of both (Weight Watchers, Jenny Craig)
  4. Eat a very-low-calorie diet: 400 calories (Optifast, Cambridge)

The failure has also challenged scientists to discover new therapies, and many new drugs are in various stages of development. One exciting possibility is the recent understanding of the endocannabinoid (endogenous cannabis-like molecules) system. When investigators were working to understand the molecular action of Cannabis sativa, they found cannabinoid receptors (CB1) in the central nervous system and in adipose tissue. Stimulation of CB1 in the brain increases appetite, and stimulation in the fat cells increases fat deposition. It seems this system is in perpetual overdrive in the obese, and blockade of the receptors decreases appetite and promotes weight loss. Rimonabant, a drug now in clinical trials, blocks the CB1 receptors and may prove an exciting new weapon against obesity.

Many other drugs are under development, but only an accurate understanding of the mechanism of obesity will lead to better therapy. Science travels from the metaphorical to the mathematical; the journey is both exciting and agonizing. The investigation meanders, loses its way, finds it again, races to the next stop, falters, sprints, and trundles along with hope toward exhilarating simplicity and elegance. The investigation of obesity is scurrying through the difficult middle stretch at present. We had better arrive soon, or the speed of decline of American civilization will be directly proportional to the rate of expansion of its girth.

The sobering fact is:

I think and breathe and live because I eat
I eat therefore I am
But soon I will not be,
Because I ate.

Random Walks: Narnia, Schmarnia

[Author’s Note: Some of you may have received an earlier, unfinished version of this particular column. It was not, as one reader suggested, an avant-garde literary choice — Behold! The Half-Finished Post! — but a sad case of an inexperienced blogger accidentally hitting “Publish Now” when she really meant to save it in “Draft” mode. Really, it’s a miracle she is allowed to blog at all. But she promises to never do it again.]

C.S. Lewis’ Chronicles of Narnia have long enjoyed enormous popularity among readers of all ages, particularly among those with Christian leanings. That’s not surprising, since Lewis was himself an avowed Christian and made no bones about the fact that the series was intended as a reworking of the traditional Christian “myth” (and I use that term in the literary sense). But it’s not obvious to everyone, as I discovered when a friend of mine recently went to see the much-anticipated film version of The Lion, the Witch and the Wardrobe. A staunch agnostic, she was horrified to find that somehow, in the translation to the silver screen, the subtleties of Lewis’ mythical retelling were lost, leading to what she considered to be little more than a ham-fisted, didactic advertisement for the Christian religion.

My friend is not alone in her objections to the film (I share them) — indeed, it is a common refrain when discussing Lewis’ literary output. There are many people who view Lewis with suspicion, precisely because he has been so warmly embraced by evangelical Christians. And in the case of bestselling children’s author Philip Pullman, author of the His Dark Materials trilogy (a wonderful read in its own right), suspicion gives way to outright hostility. Pullman is among Lewis’ most outspoken critics, clearly evidenced by a 1998 article in The Guardian, in which he dismisses the Narnia books as “one of the most ugly and poisonous things I’ve ever read.” More recently, he dismissed his rival’s work as being “blatantly racist,” “monumentally disparaging of women,” and blatant Christian propaganda in remarks at the 2002 Guardian Hay festival. (Pullman in turn has been unjustly attacked by right-wing naysayers as “the most dangerous author in Britain” and “semi-Satanic”; he is, in many respects, the anti-Lewis.)

Pullman has made some valid points in his public comments about Lewis and the Narnia chronicles. In addition to his avowed Christianity, Lewis was a conservative product of his era, with all its attendant prejudices. And he was not, by any means, “nice,” possessing a flinty, intellectually stringent, sometimes slightly bullying disposition that didn’t always win friends and influence people. Lewis did not suffer fools gladly, if at all. I doubt many of the evangelical Christians who deify Lewis today would have much cared for him in person, and vice versa. Yet he was hardly evil incarnate. I am not a diehard fan of Lewis’ work, but I will be so bold as to suggest that the truth lies somewhere in between the two extremes of beloved saint and recalcitrant sinner. Lewis was a man, plain and simple, with all the usual strengths and foibles.

As for the charge of Lewis’ work being blatant Christian propaganda, Pullman somewhat overstates the case. Certainly Lewis deliberately evoked the themes and symbols of the Christian mythology in much of his writing, but so did many of the greatest writers in Western literature: Dante, Milton, and Donne, to name just a few. The problem lies not with the choice of themes, but with Lewis’ decidedly heavy-handed style. In his hands, the subtle symbolism of myth more often than not devolved into overly simplistic allegory — a far less satisfying approach, artistically.

Lewis certainly understood the power of myth. He’d been fascinated with mythology since his childhood, particularly the Norse myths, and within those, relished the story of Balder the Beautiful, struck down by an errant arrow as a result of the meddlesome Loki. Balder is the Christ figure of the North. Norse mythology was an enthusiasm Lewis shared with J.R.R. Tolkien when the two men met at Oxford in the 1930s. (If nothing else, we may owe The Lord of the Rings trilogy in part to Lewis, who was the first to read early drafts of Tolkien’s imagined world and who encouraged his friend. Tolkien himself later credited Lewis with “an unpayable debt” for convincing him the “stuff” could be more than a “private hobby.”) Along with several other Oxford-based writers and scholars, they began meeting regularly at a local pub called The Eagle and Child, fondly dubbed The Bird and the Baby.

The Oxford Inklings, as they came to be called, were arguably the literary mythmakers of the mid-20th century, at least in England. In addition to Lewis and Tolkien, the group included the lesser-known Charles Williams, who penned fantastical tales in which, for example, the symbolism of the Major Arcana in the traditional tarot deck becomes manifest (The Greater Trumps), while the Earth is invaded not by aliens from outer space, but by the Platonic Ideal Forms (The Place of the Lion). The Platonic Lion featured in the latter may have influenced Lewis’ choice of that animal to represent his Narnia Christ figure, Aslan.

Ironically, it was Lewis’ love of myth that eventually led to his conversion. He was a notoriously prickly atheist for much of his early academic career; in fact, he was as dogmatic about his atheism as he was later about his Christian beliefs, so if nothing else, the man was consistent in his character.  He was also rigorously trained in logic, thanks to an early tutor, W.T. Kirkpatrick. An anecdote related in Humphrey Carpenter’s book, The Inklings, tells of Lewis’ first meeting with Kirkpatrick. Disembarking onto the train platform in Surrey, England, Lewis sought to make small talk by remarking that the countryside was more wild than he’d expected. Kirkpatrick pounced on this innocuous observation and led his new student through a barrage of questions and challenges to his assumptions, concluding, “Do you not see that your remark was meaningless?”

As Carpenter writes, the young Lewis thereby “learned to phrase all remarks as logical propositions, and to defend his opinions by argument.” Among the irrational concepts Lewis rejected was belief in God, or any religion, writing to his Belfast friend Arthur Greeves, “I believe in no religion. There is absolutely no proof for any of them, and from a philosophical standpoint Christianity is not even the best. All religions, that is, all mythologies… are merely man’s own invention.” For Lewis, Christianity was merely “one mythology among many.”

Personally, I’m inclined to agree with the young Lewis on that point (although I, too, have an affinity for myths both ancient and modern); it’s a shame he lost that rigorous clarity later on. I disagree with his early rejection of the thrill of imagination; he insisted it must be kept “strictly separate from the rational.” So what changed? That’s not entirely clear. Over a period of several years, Lewis learned to embrace his childhood love of myth and story, particularly the emotional sensation he called “Joy,” which would come to symbolize, for him, the divine, in the form of the Christian god. Through long discussions with Tolkien and another Oxford colleague, Owen Barfield (ironically, a fellow atheist, albeit one who propounded the story-telling power of myth), he changed his tune. Tolkien in particular played a role, convincing him that the Christ story was the “true” version of the age-old “dying god” motif in mythology — familiar to anyone who has read Joseph Campbell’s compelling The Hero with a Thousand Faces — but unlike, say, the story of Balder, Tolkien maintained that the Christ myth brought with it “a precise location in history and definite historical consequences.” It was myth become fact, yet still “retaining the character of myth,” as Carpenter tells it.

My problem is not with Lewis’ acceptance of the view that Christianity is rooted in the ancient “dying god” mythology; that should be patently obvious to lovers of story and myth. But it takes a certain special kind of arrogance to assume that, out of all the versions of this prevailing myth that have been told throughout the ages, the one of Jesus is the only “true” one. Lewis was too rigorous a logician not to realize this, and correctly concluded the point was logically unprovable. At some point, he chose to ignore his lingering misgivings and make a leap of faith. That is why they call it faith, after all. Lewis knew his Dante; he recognized that cold hard logic (personified in The Divine Comedy by the poet Virgil) could only lead him to Purgatory, not Paradise. But he hadn’t yet found his Beatrice.  He took that leap of faith anyway, which might be why he became so dogmatic about his adopted religion: he knew he was on logically shaky ground, just as his earlier atheistic foundation was shaken by his love for myth and the experience of “Joy.”

However enriching Lewis may have found his faith personally, I (and many others) would argue that his writing suffered for it. He was hardly a slouch in the writing department, but he lacked the subtlety and complexity of his friends Tolkien and Williams. His innate Christian bias seeped into everything he produced. Since he was a medievalist, this was less of a problem for his scholarly criticism, because the great works from that period in literary history are firmly rooted in the Christian tradition. But the didacticism hurt his fiction. Even Tolkien, a fellow believer, found the Narnia chronicles distasteful in their cavalier, overly-literal approach to mythology, announcing, “It simply won’t do, you know!”

Nonetheless, there are bright spots. Lewis’ science fiction trilogy (Out of the Silent Planet, Perelandra and That Hideous Strength) owes as much to the conventions of medieval literature as it does to his Christian faith. And those able to look beyond the overtly Christian trappings of The Screwtape Letters may find a highly intelligent, perceptive, and mercilessly satirical exposition of human frailty. One can also see shades of Milton’s Paradise Lost in Screwtape’s insistence that Hell’s demons fight with an unfair disadvantage: since all creation is “good,” by virtue of emanating from God — a.k.a., “the Enemy” — everything “must be twisted before it is of any use to us.”

One of my favorite passages in these fictional letters from a senior demon to his nephew, a junior tempter, concerns the sin dubbed “gluttony of delicacy,” or the “All I want…” phenomenon. For instance, the target’s mother has an irritating habit of refusing anything offered to her in favor of a simple piece of toast and weak tea, rationalizing her finicky behavior with the reassurance that her wishes are quite modest, “smaller and less costly than what has been set before her.” In reality, it cleverly disguises “her determination to get what she wants, however troublesome it may be to others.”

That particular insight — like many of those contained in the book — is just as apt today, with our modern obsession with fad diets. More and more restaurants are tailoring menu items to meet the needs of their customers, whether they’re watching their carbs, cutting down on fat, avoiding meat and dairy, or choosing to subsist entirely on dry toast and weak tea. Starbucks’ entire rationale seems to be affording its customers the ability to order their caffeinated beverage to the most precise specifications. (In that respect, I’m as guilty as the next person. You’ll pry my grande soy chai tea latte from my cold dead fingers before you’ll get me to go back to drinking Folger’s instant coffee or that standard-issue Lipton orange pekoe tea bag. At least offer me the option of selecting a nice darjeeling or Earl Grey blend from Twinings or something. Gluttony of delicacy, indeed.)

But I digress. For all my distaste for Lewis’ Christian didacticism, I forgive all on the merits of just one book: the unjustly ignored novel Till We Have Faces. It is a mythical retelling of Cupid and Psyche, told from the perspective of the ugly elder sister, Orual, who eventually becomes queen of Lewis’ fictional realm. Despite her role in bringing about her sister’s downfall, Orual is a good queen, and a sympathetic character. But the book ends with a shattering moment of painful self-awareness, when the dying Orual — who has long held a grudge against the gods for their treatment of her — finally has the opportunity to come before those gods and read her “complaint,” a book she has been carefully composing over the course of her entire life. It is the mythology she has created of her experience, the story she tells herself, the persona she has created to present to the world. But in the presence of the eternal, she realizes that her once-great work is now “a little, shabby, crumpled” parchment, filled not with her usual elegant handwriting, but with “a vile scribble — each stroke mean and yet savage.” This is her true self, her true voice, stripped of all the delusions and lies she has been hiding behind all those years.

Lewis is unflinching in his depiction of Orual’s metaphorical “unveiling.” And therein lies the novel’s lasting power. Narnia, Schmarnia; those books are highly over-rated. For once, Lewis achieved the essence of myth without lapsing into the cheap  didacticism that characterizes so much of his overtly Christian writing. Why hasn’t someone made the film version of Till We Have Faces? The same over-arching themes are present, but explored in a richer, far less literal (and less overtly Christian) context. Perhaps it is no coincidence that the novel — which Lewis rightly considered his best work — was written in 1955, after he had met and married Joy Davidman. She was his Beatrice, bringing his faith and understanding of mythology (not to mention himself) to a new, deeper level; everything up to that point had been Purgatory, mere pretense, in comparison. Alas, the marriage was short-lived; Joy succumbed to cancer in 1960, and Lewis wrote a wrenching poem in the days before her death, declaring,

… everything you are was making

My heart into a bridge by which I might get back

From exile, and grow man. And now the bridge is breaking.

Joy’s death precipitated a crisis of faith, and while Lewis weathered it and stubbornly clung to belief, I think it is clear from his later writings that he emerged with a deeper kind of faith, something closer to the spirit of mythology than any blind adherence to, or easy acceptance of, conventional religious dogma. He never quite got all the way to true Paradise; he lost his “bridge” midway. But he got farther in his lifetime than many modern believers who might not be quite as willing to ask the hard questions, nor bring the same rigorous, unflinching logic to bear on their faith. (That spotlight is uncomfortably unforgiving, and few of us can wholly withstand the glare.)

There is much to find objectionable in the life and work of C.S. Lewis, if one doesn’t happen to share his religious (or political, or moral) beliefs. But there is also much to praise. Give the man credit for his insights into what seems to be an innate human need to tell stories that make sense of our existence and give it broader meaning. That longing goes beyond the gods of any specific religion, and this is what lifts Till We Have Faces so far above Lewis’ other work and makes it timeless. Like Orual, Lewis’ entire life was spent weaving a “story,” but in the end it was always the same one, worked, and reworked, until he finally managed to hit the truth of the matter and say what he really meant. As Orual concludes in her moment of realization, “I saw well why the gods do not speak to us openly, nor let us answer. Till that word can be dug out of us, why should they hear the babble that we think we mean? How can they meet us face to face, till we have faces?”

When not taking random walks on 3 Quarks Daily, Jennifer Ouellette waxes whimsical on various aspects of science and culture on her own blog, Cocktail Party Physics.

The Best Poems Ever

Australian poet and author Peter Nicholson writes 3QD’s Poetry and Culture column (see other columns here). There is an introduction to his work at peternicholson.com.au and at the NLA.

The best poems ever: a collection of poetry’s greatest voices, edited by Edric S. Mesmer (Scholastic Inc., 2001).

Of course it can’t be anything of the kind. However, it is no more foolish than Fade To Grey, which is my imaginary name for various anthologies that come to mind. How could ‘best poems ever’ leave out ‘Sir Gawain and the Green Knight’, Goethe and Auden? Then there is the work chosen. Carl Sandburg’s ‘Buffalo Dusk’ sits next to ‘Ode On A Grecian Urn’, and Gertrude Stein’s ‘A Red Hat’ follows Shakespeare’s Sonnet 130. Where is Gwen Harwood? What happened to Hart Crane and Yeats?

However, strange as the contents may be for someone who knows the history of poetry, I can see where the editor is coming from, since this is a booklet designed for younger readers, and readers new to poetry. From that point of view Best poems ever is interesting, especially for younger teenagers at whom Best poems ever seems mainly to be aimed. This publication raises a very important question: how do you go about teaching poetry? Just as it would be wrong to introduce opera to children with Parsifal, so it would be unwise to try for slabs of Paradise Lost or The Cantos with younger readers, although there are always going to be a few who will take to the unlikeliest reading material like ducks to water. Here there is some real depth, plus some effective set pieces of the kind that appeal to younger readers, plus some banalities. All it requires is the right teacher to inculcate habits of critical reading, which can be done over time. Rush jobs don’t work in education. You have to think in year terms, not days or weeks. I can see how a good teacher could use this little edition—seventy-one pages—to get younger readers motivated. I always think longer recitation pieces work well, none of which are included here—Australian ballads, ‘The Pied Piper of Hamelin’, Alfred Noyes’ ‘The Highwayman’ or Eliot’s Old Possum’s Book of Practical Cats. Children enjoy speaking verse aloud and begin to appreciate what language can achieve in poetic form. That is the main thing—to get children enjoying language.

It is not compromising art to put ‘The Highwayman’ before children. It isn’t one of the world’s great poems, but it is certainly well made, with excellent versification, music and rhyme. There is a poem from the Best poems ever which fills the bill to a degree— ‘When Great Dogs Fight’ by Melvin B. Tolson: ‘He padded through the gate that leaned ajar, / Maneuvered toward the slashing arcs of war, / Then pounced upon the bone; and winging feet / Bore him into the refuge of the street. // A sphinx haunts every age and every zone: / When great dogs fight, the small dog gets a bone.’

This poem would work well in class. Next to it is an extract from Shelley’s ‘To A Skylark’. Already you are on much more difficult ground, but it is still a poem that could be usefully looked at in the classroom. All children relate to birds, the idea of freedom, and escape: ‘In the golden light’ning / Of the sunken sun, / O’er which clouds are bright’ning, / Thou dost float and run, / Like an unbodied joy whose race is just begun.’

There is one poem of Emily Dickinson—’My Life had stood—a Loaded Gun’, which puts you into provocative territory, as Dickinson always does. However, young readers enjoy Dickinson on a certain level, just as they take to Robert Frost immediately. ‘The Road Not Taken’ is the poem used here.

Another thing in this edition’s favour is that it includes an equal number of female and male writers. Amongst the women, apart from Stein and Dickinson, there is Lucille Clifton, Anne Bradstreet, Aphra Behn, Lorna Cervantes, Sylvia Plath, Sor Juana Inés de la Cruz, Phillis Wheatley, H.D., Emily Brontë, Gwendolyn Brooks, Barbara Guest, Christina Rossetti, Edna St. Vincent Millay, Elizabeth I, Angelina Grimké, Elizabeth Browning and Marianne Moore. There is more than a touch of the politically-correct about this selection, and some of the poems aren’t up to much, in my opinion, but at least there’s a consciousness about representation. A teacher can do a great deal with these poems. Preparation for future experience comes readily to hand, as in this poem by Elizabeth I whose opening lines read: ‘The doubt of future foes exiles my present joy, / And wit me warns to shun such snares as threaten mine annoy. / For falsehood now doth flow and subject faith doth ebb, / Which would not be if reason ruled or wisdom weaved the web.’

Millay’s ideas don’t seem very interesting, but, once again, younger readers can relate to ‘My heart is warm with friends I make, / And better friends I’ll not be knowing; / Yet there isn’t a train I wouldn’t take, / No matter where it’s going.’

Each person is going to come up with their own anthology of poems, be it for younger readers, or readers generally. Here, gathering work for use in the classroom is the thing, and that is difficult. It’s no good putting together something the size of, say, the Faber Collected Poems of Ted Hughes, which students can’t be expected to lug around with them. Best poems ever isn’t big enough, but it is portable, and small enough to read on demand.

Older teenagers can get into Owen’s ‘Dulce Et Decorum Est’—there’s plenty of material close to hand for consideration there—or Rilke’s ‘Archaic Torso of Apollo’—a pity to have missed the opportunity to put the original German next to the English translation. ‘Dover Beach’ waits with its sober melancholy. Favourites like Thomas’ ‘Do Not Go Gentle Into That Good Night’ and Elizabeth Browning’s Sonnet 43 are good choices since these are two examples of memorable speech, hard to better, which is why they are rightly famous, ‘best’, if you like.

If I were putting together an anthology for use in schools, I would do it differently. For one thing, I think it helps to relate some biography and history to place poems in an historical context. Photos of authors as children, and as adults, are good too, so that readers realise poets are no different to them. An editor has to have done some hard thinking for teachers, who are always hard-pressed for time and harassed by the extraordinary demands made on them. You have to provide some work material concerning the poems chosen, and then at least the teacher can choose to use the associated material, or take the lesson along paths they’ve predetermined.

Well, everyone’s a critic. Best poems ever isn’t that, but it makes a stab in the direction of getting together a collection of poems that could be usefully taught in the classroom. At a time when actual study of poetry seems to be diminishing in favour of rap songs, film scripts, advertising and text messaging, and when textbooks themselves are fast disappearing from the classroom, that is praiseworthy.

Selected Minor Works: Where’s the Philosophy?!

Justin E. H. Smith

(An extensive archive of Justin Smith’s writing can be found at www.jehsmith.com)

Now that I am a tenured professor of philosophy, and thus may resign from service in my profession’s pep squad without fear of losing my salary, I’m going to come right out and say it: after all this time as a student, and then as a graduate student, and then as a professor of philosophy, I still have absolutely no idea what philosophy is, and therefore what it is I am supposed to be doing.  I do not know what the special competences are that I, qua philosopher, am expected to have. It’s clear that I am expected to say “qua” a lot, and to give off other such social cues through language, gesture, and dress.  But it is that thing that I can do because I am a philosopher that a surgeon, or an archeologist, or a thoughtful sales clerk cannot do, because they are not philosophers, that remains elusive.

Well, one might reply, there’s “critical thinking.” But this is something that, in the ideal situation, any active participant in the civic life of a free society would be able to employ in reading the newspaper, listening to the speeches of politicians, etc.  There’s formal logic, but if I agree with Heidegger on anything it is that logic, like shortpants, is for schoolboys.  In the good old days, when one learned anything at all at school, one learned the forms of argumentation, the fallacies together with their Latin names, etc.  This is all really just advanced critical thinking, and if I can see that q follows from p on a symbol-dense page, I still don’t believe that counts as knowing anything.  As Wittgenstein said, everything is left the same.  Finally, of course, there’s the stuff about God and the soul, which used to be the stock-in-trade of philosophy and which philosophy still can’t really dispense with, in spite of its general awkwardness around the topics.  There I am certainly as ignorant as every other human being is and always has been.

I do have some special competences.  For instance, I know how to read Latin, and I use this in my research.  But that doesn’t count as a distinctively philosophical competence, since I could be employing it to read the Pope’s encyclicals, and those sure as hell aren’t philosophy. Some people, unlike me, claim to have distinctively philosophical competences.  ‘Round hiring season, one hears quite a bit about these from young Ph.D.’s without jobs. When philosophy departments run ads in the professional publication for new hires, they ask for candidates with competences in “philosophy of mind,” “philosophy of science,” etc.  I’ve even seen “philosophy of sports and leisure.”  When the candidates come for their interviews, they are asked: “Can you do philosophy of mind?”  And they had better reply: “Oh yes I can. I can do philosophy of mind.”  And then the hopeful young things will go on to list all the other varieties of philosophy they can “do.” Doing these is crucial. These days, one “does” philosophy, and one does not “philosophize.”  Eager young grad students have now sprouted up throughout America who innocently speak of “rolling up their sleeves” and “doing some philosophy” as if this were a group activity facilitated by a hackey-sack or a waterbong.

Now I’ve read countless books filed under “philosophy.”  I’ve thought about what these books have to say, and I’ve written as much as I’ve been able in response.  But I don’t remember ever having “done” philosophy.  I don’t think I belong to the same world as one capable of saying that.

The question lingers, though: is a specialist in “philosophy of mind” comparable to an organic chemist or an archeologist of neolithic burial mounds with respect to some specialized body of expert knowledge?  Perhaps, but this is still not some body of expert knowledge that every philosopher, qua philosopher (there’s that “qua” again), must have, since as I have already said I am a tenured philosopher and I have only an inkling of an idea about it.  It is not that I am not interested in it.  I am about as interested in it as I am in organic chemistry, and rather less than I am in neolithic burial mounds. And, well, vita brevis

So then why not just say that having expert knowledge in philosophy of mind is a sufficient but not necessary condition for being a philosopher, and that there is a cluster of such bodies of expert knowledge, with family resemblances between them, and that is what makes up philosophy?  There are a few problems with this approach.  One is that the millions of scruffy undergraduates cannot be entirely wrong when they see a page of, e.g., Jerry Fodor’s “A Theory of Content” and think to themselves: that’s not philosophy!  The kids want Dasein, and will-to-power, and différance, and other stuff they can be sure they won’t understand.  I am not saying that curriculum decisions should be turned over to the students. That would be a disaster.  But Richard Rorty is at least right to say that what philosophy departments offer fails largely to live up to the sense that newcomers have that the discipline ought to be doing something rather more, well, important.  Another problem with the family-resemblance approach is that there simply are no traits that occur regularly throughout the various subdisciplines. We cannot be a family if it’s not even clear that we’re the same species.

Again, the only common threads seem to be sociological, rather than doctrinal.  We recognize each other by our ability to rattle off the names of philosophy professors who have become major public personalities; to note “where they’re at” now, Harvard, Oxford, etc.; perhaps to mention that we’ve heard how much they get paid.  Reading Brian Leiter’s “Gourmet Report” is particularly helpful for generating this sense of cohesion, and anyone aspiring to join the club would do well to study it.  Learn the cues.  Get remarked –to use Pierre Bourdieu’s sardonic term to describe the autoreproduction of homo academicus— by someone who’s been remarked in the Gourmet Report, and you’re well on your way to being a remarkable philosopher.

The long war between the “analytic” and “continental” philosophers, too, has more to do with the sociology of groups than with beliefs. “Continental” philosophers go to their own conferences, where they tend to pick up the same speech habits, even the same distinctive North American Continental Philosophy accent.  They tend to say “imbricate” a lot, which sounds a good deal more precious in English than it does in the French from which it is lifted, but the majority of “continental” philosophers do not speak French.  Analytic philosophers have moved over the past few decades from a demand for “rigor” to an interest in being, like Donald Davidson, “charitable.”  They have also gone more postmodern than they like to imagine, and nowadays before they claim anything, in writing or in conference, they describe to you the “story” they’re about to “tell.”

There is also professional humor, of course, as an important factor in giving philosophers a feeling of belonging to a community.  For the most part, though, it is about as funny as the slogans accompanying images of cats wearing sunglasses that one often finds in secretaries’ cubicles.  It is palliative, occupational humor, like Dilbert, or like a bumpersticker on a union van that reads “Electricians Conduit Better”: a futile effort to overcome the poverty of a life that has been reduced to and identified with the career that sustains it.

But clearly there’s some common ground that is truly philosophical, isn’t there?  Brains in vats?  Moral dilemmas involving railway switching stations?  These topics do come up, but I must say I think about them as little as possible.  My own work is on the problem of sexual generation as it was understood and debated by what used to be called “natural philosophers” in the period extending from roughly 1550 to 1700.  No brains in vats, not even any trains, let alone switching stations.

Recently, most of my reading has consisted in 16th-century botanical compendia, or, as the Germans call them, Kräuterbücher. I am permitted to work on this topic, as a philosopher, because as a matter of historical fact many of the people who cared in the period about the topic that interests me today happen to be recognized, today, as “philosophers”: Descartes, Leibniz, and so on.  Thank God for them. Their shout-outs to, e.g., Antoni van Leeuwenhoek, who did not go down in history as a philosopher, permit me, as a philosopher, to read his work on the microscopical study of fleas’ legs and on the composition of semen.  And he’s fascinating. 

What used to be called “natural philosophy” and has since been parted out into the various science departments is, in general, fascinating.  It asks whether frogs emerge de novo from slime, and whether astral influx is responsible for the growth of crystals.  I know in advance the answer to both of these questions, but I can’t shake the feeling that reading these texts, and witnessing their authors struggling with these questions, is more edifying, and more important, than  seeking to solve the problems that happen to be on the current disciplinary agenda.

Of course, as Steven Shapin –that truly brilliant outside observer of philosophy’s “doings”– has said, anti-philosophy, like philosophy, is the business of the philosophers.  Periodically, after a long spell of failed system-building and bottom-heavy foundationalism, some guy comes along with a Ph.D. in philosophy and says: Philosophy! Who needs it!  Rorty is a good recent example, though certainly just the latest in a long line.  Diogenes of Sinope, in his own way, eating garbage and pleasuring himself in the agora, was out to show what a waste of time it is to theorize instead of simply to live, to live!  There are plenty of people who go much further, such as those who drop out of grad school after one semester because they got a B+ they didn’t like, and go into investment banking and spend their lives berating those who waste theirs in the Ivory Tower.  Now that is anti-philosophy. Rorty and Diogenes, on the other hand, remain susceptible to Shapin’s jab. They are insiders, and their denunciations only work because their social identities were already secured through a demonstration of concern for and interest in philosophy.

I do not know that I would like to join them.  I think I would sooner choose to masturbate at the mall than hope to take on Rorty’s establishment-gadfly role.  I think I would just like to keep writing about what interests me, without being asked, as I all too often am by the short-sighted philosophical presentists who hear of my various research concerns: Where’s the philosophy?!  For that is precisely the question I have been asking of them.

Monday Musing: The Palm Pilot and the Human Brain, Part III

Part III: How Brains Might Work, Continued…

In Part I of this twice-extended column, I tried to explain how it is that very complex machines such as computers (like the Palm Pilot) are designed and built by using a hierarchy of concepts and vocabularies. I then used this idea to segue into how attempts to understand the workings of the brain must reverse-engineer the design that natural selection has provided in that case, and in Part II, I began a presentation of an interesting new theory of how the brain works put forth in his book On Intelligence by the inventor of the Palm Pilot, Jeff Hawkins, who is also a respected neuroscientist. Today, I want to wrap up that presentation. While it is not completely necessary to read Part I to understand what I will be talking about today, it is necessary to read at least Part II. Please do that now.

Last time, at the end of Part II, I was speaking of what Hawkins calls invariant representation. This is what allows us, for example, to recognize a dog as a dog, whether it is a great dane or a poodle. The idea of “dogness” is invariant at some level in the brain: it ignores the specific differences between breeds of dog, just as it ignores the specific differences in how the same individual dog, Rover say, is presented to our senses in different circumstances, and recognizes it as Rover all the same. Hawkins points out that this sort of invariance in mental representation has been remarked upon for some time, and that even Plato’s theory of forms (if stripped of its metaphysical baggage) can be seen as a description of just this ability for invariant representation.

This is not just true of the sensory side of the brain. The same invariant representations are present at the higher levels of the motor side. Imagine signing your name on a piece of paper, in a space two inches wide. Now imagine signing your name on a large blackboard so that your signature sprawls several feet across it. Completely different nerve and muscle commands are used at the lower levels to accomplish the two tasks (in the first case, only your fingers and hand are really moving, while in the second those parts are held still and your whole arm and other parts of your body move), yet the two signatures will look very much the same, and could easily be recognized as your signature by an expert. So your signature is represented in an abstract way somewhere higher up in your brain. Hawkins says:

Memories are stored in a form that captures the essence of relationships, not the details of the moment. When you see, feel, or hear something, the cortex takes the detailed, highly specific input and converts it to an invariant form. It is the invariant form that is stored in memory, and it is the invariant form of each new input pattern that it gets compared to. Memory storage, memory recall, and memory recognition occur at the level of invariant forms. There is no equivalent concept in computers. (On Intelligence, p. 82)
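To make the signature example a bit more concrete, here is a minimal sketch in Python, entirely my own illustration rather than anything from Hawkins’s book: the signature is stored once, as an abstract, scale-free sequence of strokes, and only the low-level “rendering” changes with the writing surface.

```python
# A toy "invariant representation" of a signature: relative strokes with no
# absolute size. Stroke values and surface sizes below are made up for illustration.
SIGNATURE = [(0.10, 0.30), (0.20, -0.10), (0.05, 0.40), (0.30, -0.20)]

def render(strokes, width, height, start=(0.0, 0.0)):
    """Turn the abstract stroke sequence into concrete pen positions for a
    surface of a given size (a slip of paper, a blackboard, ...)."""
    x, y = start
    points = [(x, y)]
    for dx, dy in strokes:
        x += dx * width    # a tiny finger movement on paper...
        y += dy * height   # ...or a sweeping arm movement on a blackboard
        points.append((x, y))
    return points

on_paper = render(SIGNATURE, width=2, height=1)          # a two-inch space on paper
on_blackboard = render(SIGNATURE, width=72, height=36)   # several feet of blackboard

# The two point sets differ wildly in scale but are geometrically similar,
# which is the sense in which the stored signature is "invariant."
```

The stored thing, on this toy picture, is the stroke sequence; paper and blackboard differ only in how that one representation is translated downward into movement.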

We’ll be coming back to invariant representations later, but first some other things.

PREDICTION

Imagine, says Jeff Hawkins, opening your front door and stepping outside. Most of the time you will do this without ever thinking about it, but suppose I change some small thing about the door: the size of the doorknob, or the color of the frame, or the weight of the door, or I add a squeak to the hinges (or take away an existing squeak). Chances are you’ll notice right away. How do you do this? Suppose a computer were trying to do the same thing. It would have to have a large database of all the door’s properties and would painstakingly compare every property it senses against that database. If this were how our brains did it, then, given how much slower neurons are than computers, it would take twenty minutes instead of the two seconds it takes your brain to notice anything amiss as you walk through the door. What is actually happening at all times in the lower-level sensory portions of your brain is that predictions are being made about what is expected next. Visual areas are making predictions about what you will see, auditory areas about what you will hear, and so on. What this means is that neurons in your sensory areas become active in advance of actually receiving sensory input. Keep in mind that all this occurs well below the level of consciousness. These predictions are based on past experience of opening the door, and span all your senses. The only time your conscious mind will get involved is if one or more of the predictions turn out to be wrong. Perhaps the texture of the doorknob is different, or the weight of the door. Otherwise, this is what the brain is doing all of the time. Hawkins says the primary function of the brain is to make predictions, and that this is the foundation of intelligence.
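Here is a crude caricature of that point in a few lines of Python, my own invention rather than anything from On Intelligence; the feature names and “learned” values are made up. Predictions sit ready in memory before you touch the door, matches are absorbed silently, and only the mismatches are escalated.

```python
# What past experience of this particular door predicts, sense by sense.
learned_door = {
    "knob_texture": "smooth brass",
    "door_weight": "light",
    "hinge_sound": "silent",
    "frame_color": "white",
}

def step_outside(sensations):
    """Compare incoming sensations against predictions made in advance.
    Features you never check default to the prediction, i.e. go unnoticed."""
    surprises = []
    for feature, predicted in learned_door.items():
        actual = sensations.get(feature, predicted)
        if actual != predicted:                 # a prediction was violated
            surprises.append((feature, predicted, actual))
    return surprises                            # the only part consciousness hears about

# Nothing unusual: no surprises, and you walk through without a thought.
print(step_outside({"knob_texture": "smooth brass", "hinge_sound": "silent"}))  # -> []

# The hinge has started to squeak: exactly one mismatch bubbles up.
print(step_outside({"hinge_sound": "squeak"}))  # -> [('hinge_sound', 'silent', 'squeak')]
```

The point of the toy is only that nothing like an exhaustive database search is needed once the expected values are already sitting there, waiting to be contradicted.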

Even when you are asleep the brain is busy making its predictions. If a constant noise (say the loud hum of a bad compressor in your refrigerator) suddenly stops, it may well awaken you. When you hear a familiar melody, your brain is already expecting the next notes before you hear them. If one note is off, it will startle you. If you are listening to a familiar album, you are already expecting the next song as one ends. When you hear the words “Please pass the…” at a dinner table, you simultaneously predict many possible words to follow, such as “butter,” “salt,” “water,” etc. But you do not expect “sidewalk.” (This is why a certain philosopher of language rather famously managed to say “Fuck you very much” to a colleague after a talk, while the listener heard only the expected thanks.) Remember, predictions are made by combining what you have experienced before with what you are experiencing now. As Hawkins puts it:

These predictions are our thoughts, and, when combined with sensory input, they are our perceptions. I call this view of the brain the memory-prediction framework of intelligence. (Ibid, p. 104)

HOW THE CORTEX WORKS

Let us focus on vision for a moment, as this is probably the best understood of the sensory areas of the brain. Imagine the cortex as a stack of four pancakes. We will label the bottom pancake V1, the one above it V2, the one above that V4, and the top one IT. These represent the four visual regions involved in the recognition of objects. Sensory information flows into V1 (over one million axons from your retinas feed into it), but information also flows down from each region to the one below it. Parts of V1 correspond to parts of your visual field, in the sense that neurons in a given part of V1 will fire when a certain feature (say an edge) is present in a certain part of the retina; at the topmost level, IT, by contrast, there are cells which become active when a certain object is anywhere in your visual field. For example, a cell may fire only if there is a face present somewhere in your visual field. This cell will fire whether the face is tilted, seen at an angle, light, dark, whatever. It is the invariant representation for “face”. The question, obviously, is how to get from the chaos of V1 to the stability of the representation at the IT level.

The answer, according to Hawkins, lies in feedback. There are as many axons, or more, going from IT to the level below it as there are going in the upward (feedforward) direction. At first people did not pay much attention to these feedback connections, but if you are going to be making predictions, then you are going to need axons going down as well as up. The axons going up carry information about what you are seeing, while the axons going the other way carry information about what you expect to see. Of course, exactly the same thing occurs in all the sensory areas, not just vision. (There are also association areas even higher up which connect one sense to another, so that, for example, if I hear my cat meowing and the sound is approaching from around the corner, then I expect to see it in the next instant.) Hawkins’s claim is that each level of the cortex forms a sort of invariant representation of the more fragmented input it receives from the level below. It is only when we get to levels available to consciousness, like IT, that we can give these invariant representations easily understood names like “face.” Nevertheless, V2 forms invariant representations of what V1 is feeding it, by making predictions of what should come in next. In this way, each level of cortex develops a sort of vocabulary built upon repeated patterns from the layer below. So now we see that the problem was not how to construct invariant representations in IT, like “face,” out of the three layers below it. Rather, each layer forms invariant representations based on what comes into it. In the same way, association layers above IT may form invariant representations of objects based on the input of multiple senses. Notice that this also goes along well with Mountcastle’s idea that all parts of the cortex basically do the same thing! (Keep in mind that this is a simplified model of vision, ignoring much complexity for the sake of expository convenience.)

In other words, every single cortical region is doing the same thing: it is learning sequences of patterns coming in from the layer below and organizing them into invariant representations that can be recalled. This is really the essence of Hawkins’s memory-prediction framework. Here’s how he puts it:

Each region of cortex has a repertoire of sequences it knows, analogous to a repertoire of songs… We have names for songs, and in a similar fashion, each cortical region has a name for each sequence it knows. This “name” is a group of cells whose collective firing represents the set of objects in the sequence… These cells remain active as long as the sequence is playing, and it is this “name” that gets passed up to the next region in the hierarchy. (Ibid. p. 129)

This is how greater and greater stability is created as we move up in the hierarchy, until we get to stages which have “names” for the common objects of our experience, and which are available to our conscious minds as things like “face.” Much of the rest of the book is spent on describing details of how the cortical layers are wired to make all this feedforward and feedback possible, and you should read the book if you are interested enough.
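To give the flavor of this in code, here is a deliberately tiny Python sketch, my own simplification rather than Hawkins’s actual model, with the downward-flowing prediction machinery left out entirely: every region does the same thing, memorizing short sequences of patterns from the level below, giving each known sequence a stable “name,” and passing only that name upward.

```python
class Region:
    """One cortical region in miniature: it knows a repertoire of sequences
    and answers with a stable 'name' whenever it sees one of them."""
    def __init__(self, label):
        self.label = label
        self.known = {}                      # sequence (as a tuple) -> its "name"

    def observe(self, sequence):
        key = tuple(sequence)
        if key not in self.known:            # first encounter: coin a new name
            self.known[key] = f"{self.label}-pattern-{len(self.known)}"
        return self.known[key]

# Four stacked regions, crudely standing in for V1, V2, V4, and IT.
v1, v2, v4, it = Region("V1"), Region("V2"), Region("V4"), Region("IT")

def perceive(raw_features):
    """Each level compresses pairs of inputs from below into a single name and
    hands that name up, so representations grow more stable toward the top."""
    n1 = [v1.observe(raw_features[i:i + 2]) for i in range(0, len(raw_features), 2)]
    n2 = [v2.observe(n1[i:i + 2]) for i in range(0, len(n1), 2)]
    n4 = v4.observe(n2)
    return it.observe([n4])                  # the nameable, "face"-like thing

edges = ["edge-up", "edge-left", "curve", "edge-right",
         "curve", "edge-down", "edge-up", "curve"]
print(perceive(edges))    # the same input always returns the same IT-level name
```

Everything Hawkins actually cares about, the predictions flowing back down and being checked against the next input, would have to be layered on top of this; the sketch shows only the naming-and-passing-up half of the story.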

HIERARCHIES AGAIN

As I mentioned six weeks ago when I wrote Part I of this column, complex designs (whether produced by humans or by natural selection) are achieved through hierarchies that build layer upon layer. Hawkins takes this idea further and says that the neocortex is built as a hierarchy because the world is hierarchical, and the job of the brain, after all, is to model the world. For example, a person usually consists of a head, torso, arms, legs, etc. The head has eyes, a nose, a mouth, etc. A mouth has lips, teeth, and so on. In other words, since eyes and a nose and a mouth occur together most of the time, it makes sense to give this regularity in the world (and in the visual field) a name: “face.” And this is what the brain does.
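One last toy illustration, again mine and not Hawkins’s, with invented part lists: once the clusters of things that recur together in the world are written down, the hierarchy of names falls out by itself.

```python
# The world's recurring part-whole structure, written as a tiny table.
WORLD = {
    "person": ["head", "torso", "arms", "legs"],
    "head":   ["eyes", "nose", "mouth"],
    "mouth":  ["lips", "teeth"],
}

def parts_of(thing, depth=0):
    """Print each named regularity together with the lower-level names it is built from."""
    print("  " * depth + thing)
    for part in WORLD.get(thing, []):
        parts_of(part, depth + 1)

parts_of("person")   # mid-level names like "head" play the role "face" plays in the cortex
```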

Have a good week! My other Monday Musing columns can be seen here.

Monday, May 1, 2006

Monday Musing: What Wikipedia Showed Me About My Family, Community, and Consensus

Like a large number of people, I read and enjoy Wikipedia. For many subjects on which I need to get a quick primer, it’s good enough, at least for my purposes. I also read it just to see the ways the articles on some topics expand (such as Buffy the Vampire Slayer), but mostly to see how some issues cease to be disputed over time and congeal (the entries on Noam Chomsky are a case in point), and to witness the institutionalization of what was initially envisioned as an open and rather boundless form (in fact there’s a page on its policies and guidelines, with a link to a page on how to propose policies). For someone coming out of political science, it’s intriguing.

To understand why, just look at Wikipedia’s “Official Policy” page.

Our policies keep changing, and their interpretation as well. Hence it is common on Wikipedia for policy itself to be debated on talk pages, on Wikipedia: namespace pages, on the mailing lists, on Meta Wikimedia, and on IRC chat. Everyone is welcome to participate.

While we try to respect consensus, Wikipedia is not a democracy, and its governance can be inconsistent. Hence there is disagreement between those who believe rules should be explicitly stated and those who feel that written rules are inherently inadequate to cover every possible variation of problematic or disruptive behavior.

In either case, a user who acts against the spirit of our written policies may be reprimanded, even if no rule has technically been violated. Those who edit in good faith, show civility, seek consensus, and work towards the goal of creating a great encyclopedia should find a welcoming environment.

Its own self-description points to the complicated process, the uncertainties, and the tenuousness of forming rules to make desirable outcomes something other than completely random. Outside the realm of formal theory, how institutions create outcomes, and especially how they interact with environmental factors, cultural elements, and psychology, is, well, one of the grand sets of questions that constitute much of the social sciences. What complicates things all the more for Wikipedia is that its fifth key rule or “pillar” is that “Wikipedia doesn’t have firm rules”.

Two of these rules or guidelines have worked to create an odd effect. The first is a “neutral point of view”, by which Wikipedia (which reminds us that it is not a democracy) means a point of view “that is neither sympathetic nor in opposition to its subject. Debates are described, represented, and characterized, but not engaged in.” The second is “consensus”. The policy page on “consensus” is short. It largely discusses what “consensus” is not.

“Consensus” is, of course, a tricky concept when fleshed out. To take a small aspect, people in agreement need not have the same reasons, or reasons of equal force. Some may agree that democracy is a good thing because anything else would require too much time and effort in selecting the smartest, most benevolent dictator, etc., while others may believe that democracy is a good thing because it represents a polity truly expressing a collective and autonomously formed judgment. Sometimes consensus means agreeing not just on positions, but also on reasons and the steps between the two. In Wikipedia’s case, it seems to consist of reducing debate to “x said”-“y said” pairs and an enervation of issues that are points of deep disagreement.

One interesting consequence has been that the discussion pages, free of the “neutral point of view” and “consensus” requirements, have become sites of contest, often for “cites of contest”. Perhaps more interestingly, they unintentionally demonstrate what can emerge in an open discussion without the neutrality and consensus constraints.

I was struck by this possibility a few weeks ago when I was looking up Syrian Orthodox Christians, trying to unearth some information on the relationship between two separate (sub?)denominations of the church. The reason is not particularly relevant and had more to do with curiosity about different parts of my family and the doctrinal and political divides among some of them. (We span Oriental Orthodox-reformed, Oriental Orthodox, and Eastern Catholic sects and it gets confusing who believes what.)

While looking up the various entries on the Syrian Orthodox Church and the Syro-Malabar Catholic Church, I came across a link to an entry on Knanayas. Knanayas are a set of families, an ethnic (or is it sub-ethnic?) community within the various Syriac Nasrani sects in South India, one to which I also belong.

The entry itself was interesting, at least to me.

Knanaya Christians are descendants of 72 Judeo-Christian families who migrated from Edessa (or Urfa), the first city state that embraced Christianity, to the Malabar coast in AD 345, under the leadership of a prominent merchant prince Knai Thomman (in English, Thomas of Cana). They consisted of 400 people men, women and children, from various Syriac-Jewish clans…Before the arrival of the Knanaya people, the early Nasrani people in the Malabar coast included some local converts and largely converted Jewish people who had settled in Kerala during the Babylonian exile and after…The Hebrew term Knanaya or K’nanaim, also known as Kanai or Qnana’im, (for singular Kanna’im or Q’nai) means “Jealous ones for god”. The K’nanaim people are the biblical Jews referred to as Zealots (overly jealous and with zeal), who came from the southern province of Israel. They were deeply against the Roman rule of Israel and fought against the Romans for the soverignity of the Jews. During their struggle the K’nanaim people become followers of the Jewish sect led by ‘Yeshua Nasrani’ (Jesus the Nazarene).

Some of this history I’d known; other parts, such as our being allegedly descendants of the Qnana’im, I had not. Searching through the pages on these topics, what struck me most was nothing on the entry pages, but rather a single comment on the discussion pages. It read:

I object to the Bias of this page. We Knanaya are not all Christians, only the Nasrani among us are Christians. Can you please tone down the overtly Christian propaganda on this page and focus more on us as an ethnic group. Thankyou. [sic]

With that line, my image of my family’s community shifted. It also revealed something about the value of disagreement, and of not forcing consensus.

Ram, who writes for 3QD, explored multiculturalism, cultural rights, and group conflict in his dissertation. He is fairly critical of the concept and of much of the surrounding politics, as am I. Specifically, he doesn’t believe that there are any compelling reasons for using public policy and public effort to preserve a culture, even a minority culture under stress. For a host of reasons, some compelling, Ram believes that minority cultures can reasonably ask for assistance in adjusting, but cannot reasonably ask the rest to preserve their way of life. One reason he offers, and one with which I agree, is that a community is often (perhaps eternally) riddled with conflicts about the identity, practices, and makeup of the community itself. These conflicts often reflect distributions of power and resistance, internal majorities and minorities, and movements for reform and reactions in defense of privilege. Any move by public power to maintain a community is to take a side, often the side of the majority. (Now, the majority may be right, but it certainly isn’t the role of public power to decide.)

But the multicultural sentiment is not driven by a desire to side with established practices within a community at the expense of dissidents and minorities. Rather, it’s driven by a false idea that there is more consensus within the group than there actually is. The image is furthered by the fact that official spokesmen, usually religious figures, are treated as the authorities on all community issues and not merely on religious rites, and by the fact that minorities such as gays and lesbians are labeled as shaped or corrupted by the “outside”. Forced consensus in other areas, I suspect, suffers from similar problems.

When articles on Wikipedia were disputed more frequently, the discussion pages were, if anything, more filled with debate. Disputes have not been displaced onto the discussion pages; and if those pages have become more interesting, it is only relatively so. Ever since political philosophy, political theory, and the social sciences developed an interest in ideal speech situations, veils of ignorance, and deliberation in the 1970s, there has been a fetish made of consensus. Certainly, talking to each other is generally better than beating each other up, but the idea of driving towards agreement may be doing a disservice to politics itself. It was for that reason that I was quite pleased by the non-Christian Knanaya charging everyone else with bias.

Happy Monday and Happy May Day.