Life of a Cannibal

There is Quiet Street. On the bus, on our way to play sports on Randalls Island or Wards Island (depending on the season), we cross the first leg of the Triborough Bridge. Before we get to the Triborough, there is Quiet Street. It is 124th, or 126th. Maybe 120th. I can’t remember. The bus turns right off 3rd Avenue, and we must be silent. We are schoolboys, grades 7, 8 and 9, and we are as loud and shitty as you would expect. It’s a comfortable bus, a coach. Luggage racks, armrests that fold back and forth into the seat, a toilet in the back. No video screens. It’s the early eighties, and Mr. von Schiele or Mr. Trauth is standing up front by the bus driver, flexing his immense forearm. Mr. Trauth stutters. Mr. von Schiele’s first name is Per. We turn right off 3rd Avenue and everyone shuts up. We look out the windows. Someone once threw a rock at us on this street, and now we call it Quiet Street and we don’t talk. Nick and I created a sign language. I can’t remember if we did it because of Quiet Street, but it seems unlikely that we’d learn how to communicate with our hands just for one block.

There is the diner on Montague Street, where Beth and I order grilled cheese. It’s our place. Happy Days Diner, sunk into the street, full of irritable waiters and bad food. It’s next to a newsstand, and it’s got tables outside, but I’ve never seen anyone sitting at them. I write a poem for Beth that mentions the bright orange cheese of Happy Days, and the fact that she calls the subway ‘The Metro’. A euphemism, I call it. But she’s like that, sweet like that. She is from Boston, not Washington.

There is the corner of 91st and Park. I stand on the front steps of Brick Church with my choir and sing carols as Carnegie Hill’s Christmas trees light up in unison. We are golden-throated, I assure you. I sing weddings and funerals for hire. I am bluff tonight, familiar and smiley with my choir mates. It’s Christmas soon, and I am special in front of everyone’s eyes, and the air is crisp on my skin. It makes me feel confident. There will be a party afterwards.

There is the therapist’s on 82nd Street. I get to skip out on work for this, take off at 2:30, get stoned quick in the park maybe, and head for her office. She finds me deeply attractive, and she’s baffled by me. She’s not smart enough, of course, to make this worthwhile, but it’s a thing I’m doing. I enjoy discussing my thoughts and feelings. It amuses me to impress her with my complexity. I like pacing outside her building, too, smoking a cigarette. Her window’s right there, and I wonder if she ever looks at me before our appointment. Lots of young women are out at this time with their dogs and their children. I’ve had trouble walking lately. I’m aware of every step I take, and I’m aware that you’re aware, and the anxiety of performance is hobbling me. I’m a little shaky. I tell her about it. When the time comes for me to end the relationship, after a year of twice-a-week meetings that were, from the outset, futile, I am regretful but firm. After I close her office door for the last time and make my way across her vestibule, I hear her scream in frustration.

There is the northwest corner of 19th and 5th, catty-corner from my office. I lean against a building and smoke a cigarette. An SUV full of young black men drives by. They’re hanging out the windows. “YOU stand THERE,” one of them calls out to me, pointing authoritatively. I give him the finger and say “Fuck You!” cheerfully, a wide smile on my face. They pull over at the southwest corner of 19th and 5th and leave their hazards on. There are four or five of them–big, healthy gents. They surround me. They would like for me to apologize. I try to explain that it’s ridiculous for them to tell me to do what I am already doing, but they don’t want to listen. Eventually, in a tone I have carefully modulated to be sarcastic enough to spare my pride but not so sarcastic that I will get punched in the face, I say I’m sorry. 

There is Union Square. Jane meets me in the park at lunchtime. She is much younger than me, and has spent the day being admired by men. She says, “Boy, all you need to do is wear a tight skirt!” and I want to hit her for being coy. She asks about Beth. I shrug. I am driving her back to Vassar. We will spend one night in a wood-paneled motel room in Poughkeepsie, and then she will put on crappy jeans and a loose t-shirt and disappear. 

There is Fort Greene Park, where scores of dogs run off their leashes in the mornings before 9. I cut across the grass and worry vaguely that I am stepping in their shit. Dogs do smile. It’s painful to see them each morning, chasing each other, looking back to check that their people are watching. I am going to work in my worn shoes, hair wet with gel, and I am full of dread.



monday musing: df

Flying in at night affords a remarkable sight. You can’t imagine such a blanket of lights. And they end so abruptly at the ever widening borders of the city. Beyond them is nothing, just the blackness of the land at night, as if the entire cosmos were merely lights and void. It’s beautiful and big and strange.

***

They just call it DF, for Distrito Federal. Those from the US know it as Mexico City, if they know it at all. And my general impression is that we Americans don’t know much about Mexico and don’t care very much that we don’t. The very fact that we call ourselves Americans (what about the rest of the Americas?) is kind of a giveaway of that basic neglect. If you’ve ever known any Canadians you know that the neglect is felt up North too. Come to think of it, if you’ve ever been anywhere else on the entire globe you’ve probably realized that Americans aren’t generally recognized for their great knowledge of and concern for the rest of the world. And that is nothing new. Everyone knows it. No one much knows what to do about it. And if the center of world power really does shift toward China in the generations to come, we’ll all find out that xenophobia and self-centeredness weren’t invented in America either.

But still, it sucks. Being in DF just last week, I was struck with a sense of shame at my own lack of knowledge and paltry understanding of all things Mexican. Americans on the whole, I’ll wager, tend to think of Mexico as dusty towns where nothing is going on, as a kind of no man’s land that simply produces streams of human beings headed for the borders of Texas and New Mexico and California, a place of poverty and corruption with a dash of violence and drugs. And those things exist. The history of Mexico from the Conquistadors in the 16th century up until the present is partly a history of continuous political upheaval, economic turmoil, and sometimes just straight up weird shit. And through it all the US, to its enduring shame, played little role but to take advantage of the bad times whenever it seemed convenient (see the snatching of a good chunk of Mexico during the 19th century with as shabby a casus belli as has been offered since, . . . well, I guess they’re usually pretty shabby).

Which brings up a number of questions that ought to weigh on the mind in this newish century. Why is it that Mexican history followed a course so different from America’s in terms of political stability and economic development (and this applies to much of Latin America as well)? And why have the burdens and inequalities of Mexican history maintained themselves so stubbornly against the proposed solutions from all sides of the political spectrum? One answer, of course, is that the American colonists achieved two things simultaneously that proved difficult to do in the Mexican context. The Americans maintained a kind of political and economic continuity with the old country and decisively achieved their own independence at the same time. Mexico, by contrast, continued to be seen as a cash cow for Spain much longer, and the process of independence was much more tumultuous. The hacienda system by which the Spanish colonists extracted wealth and labor was so brutal, and so retarding to political and economic development, that it boggles the mind. It is still having its effects. In the 19th century, the Mexican constitution was a document to re-write at one’s leisure after the seemingly endless coups, revolutions, dictatorships, upheavals, and so forth.

On the other hand, US stability was achieved at a price, a pretty terrible one. The indigenous populations of North America were essentially wiped out. Things were less complex because they were made that way . . . by a genocide. There was genocide in Mexico too. The collapse of the indigenous populations through disease and maltreatment at the end of the 16th century was staggering. But the population and the history weren’t wiped away completely. There was too much there.

***

We bought a huge bundle of bright orange flowers and I trudged into the cemetery carrying them on my shoulder along with a stream of thousands of families on a Wednesday morning, bright sun, Dia de los Muertos. The great tombs to Mexico’s modernist poets and artists and intellectuals are like tombs to modernism itself with their spheres and blocks and slabs and geometric severity. We put flowers on some of the great ones. But that is not the most compelling part of the cemetery by a long shot. In truth, we weren’t at all prepared for the human emotion of it. Because the Day of the Dead in Mexico is about bringing people back to life again, if but for a moment. The care with which whole families are washing down and cleaning up the tombs of their loved ones becomes almost overwhelming. They are preparing whole meals for themselves and for the dead. They have hired Mariachi bands to play the favorite songs of the dead. They have resurrected the loves and needs and desires of the people they have lost. It is done with a sense of celebration that seems appropriate to the act of making life out of death. But a sadness is in the air too. Because the dead are dead. And if you can watch these families in their tender acts at the graves of those they have lost without your throat tightening up then you just aren’t paying attention.

***

In America, in the US version of America, it is easy to think of the New World as really a new world. The civilizations of the indigenous populations of North America were relatively easy to wipe away. They weren’t as complicated, intricate, and urban as the civilizations of Teotihuacan or the Aztecs or the Mayans, to name a few. The Teotihuacan pyramids outside of DF are the ruins of a civilization that was not screwing around. It was big and organized and complex and it mobilized vast amounts of surplus labor to build some truly stunning, crazy crap. God knows it must have been awful to have ended up in the slave labor teams that carried the stones that built these monuments.

Ironically, one of the reasons that the Conquistadors were able, with such small numbers, to defeat such massive civilizations was that the peoples of Mesoamerica were already so busy exploiting the living shit out of one another. Divide and conquer. Play grievance off of grievance. Of course, once the Aztecs were laid low, the indigenous peoples from Mexico down to the tip of South America were exposed to a kind of brutal oppression that would have made the Aztecs, with their rather curious need to pull human hearts from people’s chests at the tops of their temples, look rather touchy-feely. It is difficult to think of all the human misery without feeling sick. The rare figures like the Spanish theologian Bartolomé de las Casas, who argued in the 1540s that the Indians might, in fact, be human beings worthy of being treated as such, were notable precisely in how much they were the exception to the rule.

But there was too much of a civilization in Mexico, too many practices and beliefs and ways of life to wipe them all away completely. One of the most amazing things about being in Mexico, DF or elsewhere, is in realizing the degree to which these identities still survive in various ways. They are still part of the self-understanding of Mexicans today, especially now, after all the independence struggles and the way these struggles reached back into the pre-Columbian history in order to forge a new sense of nationalism that was not simply borrowed from Spain. The Aztecs are still around, kind of. Quetzalcoatl lives, sort of.

***

The Metro in DF smells exactly like the one in Paris, which is mildly disconcerting. It turns out the French built it. It’s a pleasure to ride. But I think I prefer the microbuses. You can take one for two and a half pesos (10.75 to the dollar). The back door is usually swinging open as the minibus putters along. People jump on and off almost as if it’s a Frisco streetcar. You can really get a feel for the vast seemingly limitless city in the microbus. DF isn’t a beautiful city in the standard sense of the term. It is too ramshackle and helter skelter for that. It includes the fanciest of contemporary architecture and the most miserable in shanty town construction. You’ve never seen people as rich as you can see ‘em in DF and you can find people with absolutely nothing too, literally dirt poor. The streets wind every which way without much reason, across spindly overpasses and back down into the heart of tree lined neighborhoods and then, whoosh, into a giant square or roundabout that spits you into the colonial center with cobblestone roads lined with old world structures and the occasional 16th century marvel. The Zocalo is a square that dwarfs anything of human scale. It is a square built to say something, though I’m not entirely sure what. If nothing else, it says, “we can do some serious shit here, too.” The cathedral explodes in the middle of the square in a fit of Churrigueresque Baroque that makes regular Baroque look like it was holding back. And then maybe the road keeps going out away from the center again, through neighborhood after anonymous neighborhood until the structures drift away into hastily built concrete boxes no one had time enough to paint. And those drift away into things thrown up with even less time and less material, merely the stuff that could be scrounged. And now the streets are barely streets, just winding dirt passages between shacks of all manner and size. 
And then those thin out as well and there is only scrubby brush and mild rolling hills and the dull rumble of the city whirring away in the distance.

***

In Samuel Huntington’s most recent fit of civilizational hysteria, he lamented that Mexicans aren’t doing as good a job of assimilating into US culture as other immigrant groups have. This, he surmises, constitutes some kind of threat to the integrity of the American project. I’d say he has it ass backwards. There is a historical opportunity here to become a little more Mexican, and I think we ought to take it up. It would be a start, at least, in redeeming ourselves after centuries of being an overall shitty neighbor in every conceivable way. Or look at it from a more selfish point of view. There is too much interesting shit about Mexico and Mexicans to let them get away with keeping it all to themselves. Becoming a little more Mexican would be a way to take better advantage of everything the North American continent has achieved in the way of human beings and the funny dumb amazing things they do. That’d be the chingon thing to do, becoming a little more Mexican.

Monday, November 7, 2005

Dispatches: Divisions of Labor

This Wednesday, graduate students at New York University will begin a strike with the intention of forcing NYU’s administration to bargain with their chosen collective agent, the union UAW/Local 2110.  (Full disclosure: I am one such, and will be participating in this action.)  This situation has been brewing for months, as the union’s contract with NYU, dating to 2001, expired in August.  That contract was something of a historic event, since it recognized the right to unionize of graduate students at a private university, for the first time in U.S. history.  That decision was helped along by a (non-binding) arbitration by the National Labor Relations Board, which ruled that NYU graduate students were workers in addition to being students.  Owing to turnover in the Board, however, and the Bush administration’s appointment of anti-labor replacement members, the Board has since reversed its precedent-setting decision in reference to a similar dispute at Brown University.  Though this is not a binding decision, and NYU is under no compulsion to derecognize its graduate student union, this is exactly what it has done, albeit while attempting to produce the impression that they tried and failed to come to an agreement. 

In actuality, the administration did not come to the bargaining table until August, at which point they issued an ultimatum to the graduate student union, insisting that it agree to a severely limited ‘final offer’ within forty-eight hours.  Made in bad faith, this offer was of course rejected, as it would have been impossible even to organize a vote of union members in the span allotted.  But it did allow President John Sexton and his administration to claim to have made an offer and been refused, a claim they have lost no opportunity to repeat in a series of inflammatory emails and letters to the student community.  Last week, in the wake of the union’s announcement that members had voted by approximately eighty-five to fifteen percent in favor of striking, the university administration sent yet another such communique, this time from Provost David McLaughlin, with the subject line ‘UAW votes to disrupt classes.’  Strategically, the administration seems to believe that by avoiding all reference to graduate students, and instead identifying the source of ‘trouble’ as ‘Auto Workers,’ the undergraduates and larger community might be convinced that the strike is simply the deluded power grab of a few dissatisfied individuals in league with an extraneous group.  Yet the transparency of the university’s rhetoric has had the opposite effect: undergraduates and faculty alike have been radicalized in support of graduate students.  My own students, for instance, have been extremely understanding, comprehending perfectly that graduate students’ teaching, for which a paycheck is received, tax and social security having been withheld, is work.

Politics, it is said, makes curious bedfellows.  One such pairing has occurred as a result of the current NYU situation.  The physicist Alan Sokal is best known for his 1996 article, published in Social Text, attempting to expose the spurious nature of references to math and science in the work of high theorists such as Derrida and Lacan.  His article, which purported to demonstrate twentieth-century physics’ confirmation of the anti-realism of post-structuralist theory, was duly accepted and published in a special issue of the journal on ‘Science Wars.’  After his revelation that the article was a ‘hoax,’ a small repeat of the two cultures conflagration ensued.  Responses abounded, including a rebuttal from Andrew Ross, professor of American Studies at NYU and the co-editor of the issue.  The entire episode has received its most detailed accounting and most rigorous intellectual genealogy (tracing the roots of this debate back to the development of logical positivism) in an article by yet another NYU professor, John Guillory.  That piece, ‘The Sokal Affair and the History of Criticism,’ is well worth reading, not least for its clarification that the stakes in the two cultures debate are not necessarily related to one’s position vis-a-vis ‘postmodernism’ or cultural studies, which, as Guillory convincingly demonstrates, has no fixed relation to particular political stances.  The current labor strife at NYU reconfirms this analysis, as Sokal and Ross find themselves on the same side of the barricades as members of Faculty Democracy, the faculty organization urging the NYU administration to bargain with the graduate student union.

Sokal has written this clear-eyed summary (to which Robin previously linked) of the issues involved, and his commonsensical tone is a much-appreciated palliative in the midst of rhetorically overheated statements issuing from many quarters.  Perhaps most importantly (and most ironically for someone who has been lambasted as an ‘unreconstructed’ leftist), Sokal points out that the paternalism of the administration’s position should be rejected.  Whether or not one considers the graduate students to be right in their cause, he points out, their democratically decided resolution to collectivize in order to negotiate contracts should be respected.  Sokal’s distaste for the increasingly rapid transformation of the university into an institutional substitute for parental duties (and a remedial solution to the decrepitude of public high school education) is one I share.  I would add that the current impasse is more a matter of structural conflict than of political sympathy.  Private universities as they exist today depend on a pyramidal structure: a large number of graduate students at the bottom of the labor force are needed to perform much of undergraduate teaching, while at the top of the pyramid are ever fewer tenured faculty.  Even these can be further divided into ‘stars’ who command greatly disproportionate income while having few teaching responsibilities, and the lower order of adjunct professors who perform much of the remedial education in such subjects as composition.  This system necessarily produces many more credentialed Ph.D.s than the labor market can employ at the higher levels, which in turn means that many graduate students spend years teaching for a pittance without making it to the security of a tenured position.  Hence the pressure on this beleaguered stratum to unionize, so as to ensure a modicum of stability of salary and benefits.

Though they are a transient class, passing through degree programs, as opposed to a permanent workforce, graduate students have thus come to bear a very large portion of the daily labor of teaching undergraduates.  Interestingly, what the commotion about this strike has appeared to ignite at NYU is a debate about collectivism.  While many are fully willing to grant a certain ethical status to the picket line, and so defer to the right to strike that has been the hallmark victory of the labor movement, the union’s opponents (be they faculty or students) tend to retreat to personal responsibility as the ground on which to base such decisions.  Thus does American individualism reappear in the debate, as usual licensing those for whom ethical imperatives are always imposed from without, rather than perceived from within.  Combined with the detached, analytical impulse that intellectual work requires, this produces a strong ideological propensity for members of this particular class of workers to dissent from counting themselves as part of a collective organization, with the exception, of course, of their belonging to the university itself.  Membership in the university, however, is mystified by the institution’s self-image as the social location outside of or beyond corporate culture and other more baldly hierarchical sites of work.  That the intellectual and editorial freedom that the university trumpets as its role to protect might conflict with the conditions under which that freedom is maintained is a paradox that remains all too often unclear to the participants in this debate.  Whether the temptation to fall back on just this founding myth of the university will prove to be the union movement’s undoing, we shall discover in the days and weeks ahead.


Rx: The War on Cancer

I will also ask for an appropriation of an extra $100 million to launch an intensive campaign to find a cure for cancer, and I will ask later for whatever additional funds can effectively be used. The time has come in America when the same kind of concentrated effort that split the atom and took man to the moon should be turned toward conquering this dread disease. Let us make a total national commitment to achieve this goal. America has long been the wealthiest nation in the world. Now it is time we became the healthiest nation in the world. --President Richard M. Nixon, in his 1971 State of the Union address.

I. THE SITUATION AT PRESENT

Hippocrates, the famous Greek doctor and author of the Hippocratic oath, defined cancer as a disease which spreads out to grab parts of the body like “the arms of a crab”. What proves fatal for the victim is the spread of the cancer cells beyond the site of origin, and in this sense it was roughly a thousand years later that Avicenna noticed that “a tumor grows slowly and invades and destroys neighboring tissues”. Faithful to its name in more ways than could possibly have been anticipated by Hippocrates, the disease which has launched the $200 billion “War on Cancer” in America continues to spread, invading the lives of almost every family. Based on available data, there were 10.9 million new cases of cancer worldwide in a single year, 6.7 million deaths, and 24.6 million persons who had been diagnosed with cancer in the previous five years. As in many other areas, the USA unfortunately leads in this one as well. More than 1.5 million Americans develop cancer each year, and the disease claims some 563,700 lives, killing more Americans in 14 months than the combined toll of all wars the nation has ever fought (a new cancer is diagnosed every 30 seconds in the United States, and about 1,540 people die each day from their disease). A look at the worldwide incidence of cancer raises some puzzling issues, especially the unexpectedly high incidence of cancers in the USA:

[Figure: worldwide incidence of cancer by country]

If we ascribe the increased incidence solely to the aging of the population, then why is the incidence so much higher in the USA than in other developed countries where life expectancy is comparable? On the other hand, if lifestyle is more important, as suggested by the association of smoking with cancers of the lung, then why is the incidence not equally high in countries where people smoke at least as much as in the USA? One answer could be that smokers in places such as South America or India do not live long enough to develop cancer. However, the incidence of lung cancer in Sweden, with an average life expectancy of 80.3 years, is less than half of that in the USA, which has a life expectancy of 77.4 years (22 versus 55.7 per 100,000, respectively), even though about 22% of adults smoke in both countries. This suggests that lifestyle may be important, but that smoking may not be the only important factor.
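The Sweden/USA comparison is simple arithmetic on the quoted figures; here is a minimal sketch in Python using only the numbers cited in this essay (note these are crude rates, not age-adjusted):

```python
# Lung cancer incidence per 100,000, life expectancy in years, and adult
# daily-smoking prevalence, all as quoted in the essay (not a fresh data pull).
sweden = {"incidence": 22.0, "life_expectancy": 80.3, "smoking": 0.22}
usa = {"incidence": 55.7, "life_expectancy": 77.4, "smoking": 0.22}

# Sweden's lung cancer incidence is indeed less than half the US rate...
ratio = sweden["incidence"] / usa["incidence"]
print(f"Sweden/USA incidence ratio: {ratio:.2f}")  # about 0.39

# ...even though Swedes live longer and smoke at the same rate, so neither
# age nor smoking prevalence alone can explain the gap.
assert ratio < 0.5
assert sweden["life_expectancy"] > usa["life_expectancy"]
assert sweden["smoking"] == usa["smoking"]
```

The point of the check is only that the two usual single-factor explanations (age and smoking) are each contradicted by the quoted numbers.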

Table 1. Approximate incidence (%) of daily smoking among adults in different geographic areas.

Region            Men      Women
Sweden            22       24
Europe            46       26
Eastern Europe    60       28
Middle East       41       8
Turkey            63       24
Rest of Asia      44-54    4-7
South America     40       21
Africa            29       4

Source: Tobacco Alert. Geneva: World Health Organization, Programme on Substance Abuse, 1996.


The idea that genetic predisposition to cancer may have something to do with these high American numbers has been largely laid to rest by the experience of the Japanese who immigrated to America. Both the incidence and the types of cancer were quite different between fresh immigrants and their American counterparts; however, these differences disappeared in second-generation Japanese Americans who adopted the local lifestyle.

II. PROBLEMS IN THE FIGHT AGAINST CANCER

President Nixon declared the War on Cancer 34 years ago using the 100 words which stand as an epigraph for this essay. Acknowledging that the tools necessary to accomplish the task were missing, the mandate was to invest money in research and apply the results to reduce the incidence, morbidity, and mortality from cancer. After approximately $200 billion spent on this war since 1971 (adding up taxes, industry support, etc.), 150,855 experimental studies on mice, and the publication of 1.56 million papers, the results can best be summarized in this one graph:

[Figure: U.S. death rates from cancer versus heart disease since 1950]

While deaths from heart disease have declined significantly in the last 30 years, the percentage of Americans dying from cancer, save for those with Hodgkin’s disease, some leukemias, carcinomas of the thyroid and testes, and some childhood cancers, is about the same as it was in 1950. For the last twelve years, cancer mortality has declined by about 1% annually. Early detection has had some significant impact, but once the cancer has spread, the outcome has generally not changed for the last half century.
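A 1% annual decline compounds only modestly over time. A quick back-of-the-envelope calculation, assuming (hypothetically) that the quoted rate held steady for all twelve years:

```python
# Compound effect of a steady 1% annual drop in cancer mortality.
annual_decline = 0.01
years = 12

# Fraction of the starting mortality rate that remains after `years` years.
remaining = (1 - annual_decline) ** years
print(f"After {years} years: {remaining:.1%} of the starting mortality rate")
# i.e. only a cumulative improvement of roughly 11% over twelve years
```

Which is real progress, but of a very different order from the decline in heart disease deaths over the same period.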

As someone who has been directly involved in cancer research since 1977, and obsessed by it for longer, I am a first-hand witness to the by now familiar cycles of high expectation and deflating disappointment which have been its hallmark for the last three decades. Because the stakes are so high, both in terms of life-and-death issues as well as the staggering finances involved, emotions tend to run high on all sides. In this, and in a series of subsequent essays, I would like to summarize some of the rather obvious reasons why this war on cancer has not manifested the anticipated, tangible signs of victory so far, as well as the dramatic gains that have been achieved at the scientific level as a result of this unprecedented investment in basic science. I hope that at the end of it all, you will be able to view the score-card dispassionately and declare a winner.

  • Even though President Nixon and subsequent administrations have continued to invest heavily in cancer research, with the dedicated budget for the National Cancer Institute alone rocketing to more than seven billion dollars this year, the monies are not being spent as wisely as they could be. For example, the funding agencies tend to reward basic research performed in Petri dishes and mouse models that bear little relevance for humans, with some 99% of investigators using xenografts. Imagine the exceedingly contrived scenario of achieving a “cure” in a severely immune-compromised animal injected locally with human tumor cells and then treated with the strategy being tested. Is it a surprise when the results cannot be reproduced in humans? Basic cancer research may one day be successful at identifying the signaling pathways that determine malignant transformation; however, it will be a long time before the entire process of cancer initiation, clonal expansion, invasion, and metastasis is understood, especially in the context of the highly complex, poorly understood micro-environment in which the seed-soil interaction occurs. Using this approach, an effective therapy for cancer can essentially be developed only after we understand how life works. Can our cancer patients afford to wait that long? Isn’t the history of medicine replete with examples of cures obtained years, decades, and even centuries before their mechanisms of action were fully understood? What about digitalis, aspirin, cinchona, vaccination?
  • There is an odd love-hate relationship that has developed between Academia and the Pharmaceutical industry. On the one hand, major research and development (R&D) efforts by industry, conducted under great secrecy, result in the identification of potentially useful novel agents which nonetheless must ultimately be tested in humans. Credible clinical trials in human subjects are conducted by academic oncologists. On the other hand, advances being made in the laboratories of academic researchers need the partnership of industry for commercial and widespread application. This forces the Industry and Academia to become reluctant bedfellows. Roughly 350 cancer drugs are in clinical trials now.
  • In order for a drug to show efficacy, the FDA demands that it be tested first in animal models that are not relevant to humans. To make matters worse, when drugs are approved for human trials, they can only be tested in terminally ill patients. Many agents that would be effective in earlier stages of the disease are therefore thrown out like the baby with the bathwater. Finally, the end point sought in most drug trials, even in end-stage patients, continues to be a significant clinical response. Very few, if any, surrogate markers are used to gauge the biologic effects of the drugs. Surrogate or bio-markers include proteins produced by the abnormal genes, as well as processes and pathways that distinguish cancer cells from normal cells, such as the formation of new blood vessels, or angiogenesis. If a drug does not produce the desired clinical end point, it is likely to be abandoned completely, even though its biologic activity could be harnessed for more effective use in combination with other agents.
  • As the internet dotcom bubble burst in the 90s, the biotechnology industry was the big winner, since some of the best minds in the country made lateral moves and began to invest their talents in this area. The striking change over the last decade in the pharmaceutical industry has been its ability to attract and retain high-caliber academic scientists and clinical investigators. Even with this vital infusion, it takes 12-14 years and a prohibitive ~800 million dollars for a pharmaceutical company to get a new drug approved, most of the money having been raised from the private sector, which is clamoring for a profit. Following the arduous R&D process and the tedious, time-consuming and labor-intensive animal studies, by the time a clinical trial is undertaken in human subjects, the stakes are already so high that companies may find themselves struggling to demonstrate the tiniest statistical benefits over each other’s products.
  • The catch phrase today is “Targeted Therapies”: the concept that a convergence of science and advanced technologies will illuminate the cumulative molecular mechanisms that ultimately produce cancer, and that this will lead to objective drug design to pre-empt or reverse the cancer process. Except for the drug Gleevec, developed against Chronic Myeloid Leukemia (CML), a rare type of leukemia in which a single gene mutation underlies the pathology, all other targeted therapies have so far met with only modest success. For example, the recently approved Erbitux and Avastin for cancer of the colon and rectum improved survival by 4.7 months when given in conjunction with chemotherapy. Even in the area of targeted therapies, the efforts are frequently scattered. Academia, industry and institutions such as the NCI, FDA, CDC, EPA, DOD etc. are not coordinating their resources efficiently. For example, hundreds of researchers across the nation are performing gene expression and proteomic experiments, diluting the number of specific cancers examined for potential targets instead of developing organized collaborative studies.
  • Research on such topics as epidemiology, chemo-prevention, diet, obesity, life-styles, environment, and nutrition is woefully under-funded.

III. WHAT SHOULD BE DONE TO FIGHT THE WAR MORE EFFECTIVELY

“Stomach cancer has disappeared for reasons nobody knows and lung cancer has rocketed upward for reasons everyone knows,” says John Cairns, a microbiologist now retired from the Harvard School of Public Health. To win this war, some steps that need to be taken are rather apparent, while others remain to be carefully debated and planned. For the vast majority of cases, no “cause” can be identified, but cancer is presently believed to be triggered by a combination of genetic predisposition and lifestyle factors such as diet and occupation. Consequently, the chances of developing cancer can be significantly reduced by not smoking, adopting a healthier lifestyle, and proper nutrition. Focus is needed on improving methods for early detection, on treating precancerous conditions (the dysplasias and metaplasias), and on understanding the reasons for susceptibility to the malignant process in individuals and families.

Where research is concerned, man must remain the measure of all things. Human tumors rather than mouse models should be studied directly. To harness rapidly evolving fields like nanotechnology, proteomics, immunology, and bioinformatics, and focus them on serving the cause of the cancer patient, we must insist on collaboration between government institutions (NCI, FDA, CDC, DOD etc.), academia and industry. In the case of the Human Genome Project, collaboration was the key to the rapid mapping. The same concerted effort needs to be invested now in sequencing mutations in hundreds of freshly obtained human cancers of all types, a venture which has been proposed as the Cancer Genome Project. It is a well-known fact that all those machines and robotics developed worldwide for sequencing the human genome are either sitting idle or being used for sequencing the genomes of microorganisms and fruit flies. They would serve a far better purpose if employed in sequencing several hundred breast, lung, colon and prostate cancers to identify the most common mutations. Identification of specific mutations will lead to the discovery of seminal signaling pathways unique to organ-specific malignant cells, which can then serve as therapeutic targets. Given that nature is highly parsimonious, it is likely that some of these pathways would be redundant, as was the case with Gleevec. This drug was developed specifically to inhibit the tyrosine kinase of the Abl gene, and has proved effective in producing remissions in >97% of CML patients. However, it has now been discovered that patients with gastro-intestinal stromal tumors, or GIST, can also respond to this drug, as the cells use the same tyrosine kinase blocked by Gleevec.
More recently, thyroid papillary cancers, subsets of patients with other bone marrow disorders (for example those showing translocations between chromosomes 5 and 12) and even cases of as different a disease as pulmonary hypertension, have been found to respond to Gleevec. What this proves is that some key pathways are likely to be present in cancers or even diseases across organs, and their identification could deliver unexpected benefits.

Cancer is a multi-step process that involves initiation, expansion, invasion, angiogenesis and metastasis. Each stage of the disease may offer a variegated set of targets, thereby making the one-drug, “magic bullet” approach feasible only in a handful of cancers where single mutations underlie the malignant process (as described above for CML and Gleevec). A critical lesson from developing successful therapy for AIDS is that three drugs targeting the same virus had to be used before effective control of its replication was achieved. Similarly, multiple targets must be attacked at the same time in the cancer cell. The “seed and soil” approach, where drugs act on both the malignant cells and their microenvironment, would be preferred over approaches targeting either in isolation. For example, a drug that blocks a key deregulated intracellular signaling pathway and checks the malignant cell’s perpetual proliferation can be combined with an anti-angiogenic drug, which stops the formation of new blood vessels and arrests the invasion of tissues by the tumor. The objective choice of agents would require the practice of evidence-based medicine, and this is what the government institutions should be rewarding investigators for. Many effective therapies directed against components of the seed and soil are already available, but researchers are only allowed to use one investigational agent at a time, and that, too, in patients with advanced disease. This stilted and almost self-defeating approach needs to be abandoned. Patients who already have a diagnosis of cancer cannot afford to wait.

I am optimistic that in the next few years, given the power and sheer velocity of the evolving bio-technology, the very fundamentals of cancer research and treatment will have undergone cataclysmic changes. It may not be possible to cure cancer within the next decade, but, in the words of the NCI Director, Dr. Andrew von Eschenbach, it may very well be possible to “transform cancer into chronic, manageable diseases that patients live with – not die from”.

Monday, October 31, 2005

From the Tail: Big Fat Regret

With the recent indictment and resignation of I. Lewis “Scooter” Libby over the Valerie Plame affair, there have been repeated calls from the obvious quarters for the President to apologize for the CIA leak. Whether Mr. Bush feels any regret over the incident or not, it seems unlikely that he will express any, at least until he absolutely has to.

Regret (the feeling of disappointment or distress about something that one wishes could be different) is a strange beast. It is, of course, quite possible to feel regret without guilt, or even without any acknowledgement of personal responsibility. Also, regret is sometimes the inevitable byproduct of actually making a choice (I’ll take the granola bar over the chocolate cake). Regret is not a rare commodity. During any given day there are dozens of moments when one expresses some inconsequential level of regret to oneself. So what exactly do people mean when they say (often with a flourish) that they “have no regrets”? Perhaps what they mean is that they don’t really regret anything enough, i.e. it all comes down to the degree of regret experienced. The question about regret that I find interesting is: How is the degree of regret distributed? Is it a Bell Curve? That would be nice. A Bell Curve is conveniently symmetric around its mean (average), that is to say, its median value is the same as its average. Let’s do the following thought experiment to find out!

  1. Try to remember the things that you have regretted over the past day and count those which rise to the level worthy of reflection in bed tonight. Chances are you will only be able to come up with a couple.
  2. Go back a week and do the same thing. What regretful things have stuck in your craw? Strangely, there are likely to still be a similar number, maybe two or three. How did that happen? Surely, it should be more like seven times your daily regret count. Your threshold for craw stickiness has gone up.
  3. Now do the retrospective on your life and try to catalog your regrets. Again, the number is about the same.

To be more quantitative, if you could go back in time and correctly catalog the list of your regretful incidents over a day, week and month, and graph the degree of regret versus the sequence of incidents, you would probably come up with something like this for the daily, weekly and monthly lists. (The “Real Regret” Zone is the threshold of regret degree above which you would count an incident while doing the three-step experiment I outlined above.)

The week graph is a blown-up version of the month one, and the day graph is a blown-up version of the week one, but they all have the same structure. Each graph has many tiny incidents with a few much bigger ones, and only those incidents above the threshold register as being truly regretful. A strange “self similarity”, or scale invariance, is observed.

Now if your regret were distributed like a bell curve it would not look like this. The big events would not be so big, and the scale invariance would not be present. So what is this kind of distribution?

This kind of distribution is Fat Tailed. The reason is that the infrequent events are huge. Most of us are trained to think in terms of Bell Curves, but this is a very different animal. A Bell Curve has the same median and mean. In fat-tailed distributions the median is extremely small relative to the mean, since the rare events are so huge. This skew in the distribution leads to various odd properties; for example, the variance can even be infinite!
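The median-versus-mean gap is easy to see in a simulation. Here is a minimal sketch; the choice of a Pareto distribution with shape 1.5 as the fat-tailed example, and all the parameter values, are my own illustrative assumptions, not anything from the original post:

```python
import random
import statistics

random.seed(42)
N = 100_000

# Bell Curve sample: median and mean nearly coincide.
normal = [random.gauss(100, 15) for _ in range(N)]

# Pareto sample with shape alpha = 1.5: a fat-tailed distribution whose
# rare draws are so large that the mean is dragged well above the median
# (for alpha <= 2 the variance is infinite in theory).
pareto = [random.paretovariate(1.5) for _ in range(N)]

normal_ratio = statistics.fmean(normal) / statistics.median(normal)
pareto_ratio = statistics.fmean(pareto) / statistics.median(pareto)

print(f"normal: mean/median = {normal_ratio:.3f}")  # close to 1
print(f"pareto: mean/median = {pareto_ratio:.3f}")  # well above 1
```

Running it, the normal sample's mean/median ratio sits near 1, while the Pareto sample's mean is dragged far above its median by a handful of enormous draws: precisely the skew described above.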

It turns out that a very large number of things are fat tailed:

  1. The frequency of words in a book. This is what captured the imagination of a Harvard linguistics lecturer by the name of George Zipf, who discovered that if the most frequently used word in a book appears N times, the Kth most frequent word in that book appears about N/K times. This relationship holds (with some minor fudge factors) for most books written in the English language and is known as the Zipfian distribution.
  2. The distribution of population in a city: Remarkably this turns out to be Zipfian as well. Take any developed or developing country and rank the cities. A terrific source of data for a bunch of different countries is here. You will find that the distribution is very close to being Zipfian. The number of people who have scratched their heads about why this happens reads like a who’s who of economics: Herb Simon, Paul Krugman, Benoit Mandelbrot (who isn’t regarded by economists as an economist but actually is), and most recently Xavier Gabaix among lots of others. Even those who don’t think the distribution is Zipfian agree that it is fat tailed.
  3. The number of web links pointing to a web page. Various popular books have been written on this and since it has been covered extensively elsewhere I’ll resist the temptation to expound. 
  4. The number of subscribers to a blog feed. Most blogs are barely read, but a few blogs get a huge number of subscribers. An interesting set of graphs relating to this has been provided by the folks at Ask Jeeves.
  5. The distribution of income. Ever since Pareto, it has been observed that the rich are few and much richer than the rest. The scale invariance of the distribution results in very rich people actually feeling quite poor! If you think that you’d feel rich with $10 million in the bank, think again!
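Zipf's N/K rule from item 1 can be sketched in a few lines of code. This is only an illustration of the arithmetic; the tiny sample text is my own, and is of course far too short for the law to actually hold (a real check needs a whole book):

```python
from collections import Counter

def rank_frequencies(text):
    """Return word counts sorted from most to least frequent."""
    words = text.lower().split()
    return [count for _, count in Counter(words).most_common()]

def zipf_prediction(freqs):
    """Zipf's rule: if the top word appears N times, the k-th ranked
    word should appear about N / k times."""
    n = freqs[0]
    return [n / k for k in range(1, len(freqs) + 1)]

# Tiny illustrative text (hypothetical, too short for Zipf's law).
sample = "the cat and the dog and the bird"
freqs = rank_frequencies(sample)
print(freqs)                  # [3, 2, 1, 1, 1]
print(zipf_prediction(freqs))  # [3.0, 1.5, 1.0, 0.75, 0.6]
```

Comparing the two printed lists on a real book-length corpus is exactly the check Zipf performed: the observed rank-frequency curve tracks the N/K prediction surprisingly well.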

Fat tailed distributions are more frequently called Long Tailed. (The term “The Long Tail” yields 1.57 million google results as opposed to a paltry 12,700 for “The Fat Tail”.) In statistics, the rare events of a distribution are said to be in the tail. When these rare events are large, we say that the tail is FAT. Even a bell curve can have a very long tail, so why the terminology Long Tail? Well, if you arrange the values in descending order and graph them, i.e. with the big ones on the left and the smallest values to the extreme right, the picture looks like a long-tailed beast. Of course, the head of this beast is the tail of the distribution! Also, it seems more natural to say, for example, that the distribution of blog readership has a big fat head and a long tail, but that isn’t really accurate from a statistical point of view. So take your pick of terminology.

Now the question of WHY things are fat/long tailed is still somewhat unclear, but of course, a multitude of theories abound. Zipf believed it all stems from the tendency of human beings to follow the path of least resistance, so that the inertia in the system tends to make the big bigger. There are various “winner takes all” theories which are commonly spouted in business circles. Economists espouse phenomena of “increasing returns” and “switching costs”, which are appealing in certain contexts. It is all very fascinating and intellectually rich.

But what explains the degree of regret — why is IT fat tailed? I wish I had an explanation, but all I have is more speculation: perhaps circumstances are responsible, i.e. things happen in a “fat tailed” manner so that we react to them that way. Or perhaps it is our reactions which are more responsible, i.e. after an accumulation of little things reaches some limit (a camel’s-back-breaking limit, so to speak) we react in an extreme fashion.

Finally, it would seem that other emotions, e.g. happiness work the same way. Repeat the experiment and see for yourself.

The mystery of it all! If only I knew more. And yes, regret has struck again!

Monday Musing: Posthumously Arrested for Assaulting Myself

Those of you who have never taken 20-24 hour flights can probably scarcely imagine the vertigo-inducing fatigue involved. I have taken one of these flights fairly regularly for decades now, from Karachi to New York, and find it hard to understand how my elderly parents ever survived them. In addition to the sheer length of time for which one is confined to one’s (in my case, very small economy-class) seat, these flights almost always originate in the early hours of the morning, giving one just enough time to reach the deeper parts of sleep after having spent the evening packing, buying last-minute things, and saying goodbye to friends, before one must heave oneself up from bed at something like 3 in the morning, say one’s emotional goodbyes to relatives, and head for Qaid-e-Azam International along the deserted Sharia Faisal. So one almost always even starts the trip in a groggy, enervated state. Then, as if almost a full day in the stale, desiccated air and cramped and noisy quarters of a Pakistan International plane (they must have the highest ratio of children to adults of any airline) weren’t enough, there is the time-difference induced jet lag to contend with upon arrival at JFK, and for some reason I am always unable to eat any of the last meal they serve on the plane.

All this is to give you a sense of how I am usually feeling physically and mentally as I stand in the long immigration lines at JFK waiting for my passport to be stamped. And there I was standing one morning about three years ago. When my turn finally came up at the counter, the INS agent asked me more questions than usual, and then closed his counter and asked me to follow him to a large room at the side of the immigration hall. Once there, along with a bunch of other people who had also been pulled aside for extra questioning, I waited for my file’s turn to be examined by the officer at the counter there. (The original INS agent had deposited me and then returned to his duties elsewhere.) Finally my name was called, and after some very aggressive questioning about who I am, what I do, where I live, and on and on (and they frequently keep asking the same questions over and over, making one feel like they are hoping to trip you up in case you are lying), I was informed that I was being detained. Two agents handcuffed me and led me to another smaller room. When I asked what I had done, they said things like, “Oh, you know what you’ve done. You are in trouble, my friend.” Then I asked to call a lawyer, and I was informed that I hadn’t yet been admitted to the United States, and so had no legal standing. No lawyer would be called, nor would I be allowed to call anyone else. They took my cuffs off, fingerprinted me (very difficult because of my sweaty palms), recuffed me and then left me there.

It was at this point that my knees went a little trembly. I had heard many stories of Pakistanis being held under the Patriot Act without charges for months, and now I had visions of Guantanamo in my head, and I became almost dizzy with the adrenaline rush of fear. I thought that I must have been mistaken for someone else, God knows who, and there would be no chance to clear my name. At this point, I was so tired and hungry that I could barely stand up. After a few hours, a woman came to the room to get some papers she needed and I took this opportunity to beg her to let me call my girlfriend. I guess she took pity on me. She took out a cell phone and asked for the number. I told her and she dialed it and then held the phone to my ear (my hands were cuffed behind me). Margit (now my wife) answered the phone and immediately started asking what had happened, why I wasn’t home yet, she was so worried, etc. I told her to stop talking as I didn’t have much time. I told her I had been detained by the INS, and that she should contact a lawyer and my brother immediately, and get someone to JFK. I would try to call her again if I could, but wasn’t sure if I would be able to. To her credit, she was calm, and I felt much better that at least someone knew what was happening to me.

I then sat in that room for another few sweat-drenched hours before a couple of INS officers came in along with two police officers from the NYPD. The NYPD officers told me that they had a warrant for my arrest. This immediately came as a relief to me, because whatever it was they wanted with me, I would rather be held by the NYPD in New York, than in some INS facility. I felt like whatever it was, I would be able to clear it up. That’s when things started to get weird: the NYPD officers addressed me as Mr. Edward Sampson, as in, “Let’s go, Sampson.” When I protested that I wasn’t Edward Sampson, whoever that might be, they told me that fingerprints don’t lie, and I had a full ten-finger match as one wanted Edward Sampson. They told me to stop lying and just admit that I really was Edward Sampson. The name sounded vaguely familiar but I couldn’t quite place it in my exhausted state. The INS guys removed my cuffs and the NYPD officers produced a kind of wide leather belt which looked vaguely like some S&M contraption, put it on me, then cuffed my hands to it. I was then led out for the perp walk in front of all the other passengers, coming out by the regular path where people wait for their friends and relatives to come out. Most people whispered to each other rather excitedly when they saw me being led out, held by each arm by one of the officers, wearing this restraint, and a nice suit I had had tailored while in Pakistan.

It was then that I remembered who Edward Sampson was, and it came to me suddenly: about a decade earlier, my nephew Asad and I had been having a drink with my friend Karim at the West End Restaurant and Bar (where Jack Kerouac and Allen Ginsberg used to hang out) near Columbia University (I had just started the Ph.D. program in philosophy there), when four rough looking characters wandered in. They looked like skinheads, and they sat at the table behind where we were standing at the bar. Asad had draped his jacket over one of the chairs on which one of these guys was now sitting, and so he tapped the guy on the shoulder so he could retrieve his jacket. I saw the guy stand up and get in Asad’s face, but couldn’t hear what was going on. The man then raised his voice and I heard the N-word being yelled at Asad along with a string of curses, after which the man grabbed my nephew’s hair with his left hand and drew back his right fist, getting ready to throw a punch. I hit him first. I had lunged from the side, and my momentum threw both of us to the floor. I didn’t know it then, but I was rolling around on the floor of the West End with one Edward Sampson.

We were separated by the bouncers of the West End and all six of us were thrown out. Once outside, these guys ganged up on me and managed to throw me to the ground where I hit my head on the sidewalk. I was momentarily stunned, and had no chance after that. Mr. Sampson pummeled me pretty good. Then the police arrived, and Sampson and crowd quickly walked off. I explained to the police that my nephew had been assaulted, and while trying to protect him, I, too had been beaten up and the guys were trying to get away. The police told me that if I insisted on having them arrested, they would have to arrest Asad and me as well, since they hadn’t been there to see who started it. When I produced witnesses, they dismissed them as my friends, so I said fine, go ahead and arrest all of us, but I am not going to let these punks get away with this. I figured we would sort it out later in court. And so the four of them were also picked up and all six of us were driven to a precinct where we had our portraits taken, were fingerprinted, etc., before being released on our own recognizance. And at that precinct is where I first heard the name of my attacker: Edward Sampson.

The next day, after a trip to the Manhattan Eye and Ear Infirmary, I showed up at the philosophy department at Columbia with black eyes, swollen mouth, etc. Sidney Morgenbesser was the first to offer his help, and Akeel Bilgrami made a call to a lawyer friend of his. They knew what they were doing, because the next morning I received a phone call from Robert Morgenthau’s office (he was, and might still be, the District Attorney of New York) telling me to go down to the courthouse on Center Street, where a couple of assistant DAs were waiting for me. They listened to my story, called a few of the witnesses, and then told me that charges against Asad and me were being dropped, and that Sampson and his friends would be prosecuted under the hate crimes statute of New York. I was pleased by this, and felt vindicated that I had insisted that the police arrest everyone, rather than just letting these guys walk. Except that those people didn’t show up at their hearing, and were never heard from again.

By the time the NYPD guys had put me into the back of their van outside JFK, I had figured out what must have happened: somehow, that night ten years before, someone at the precinct had made a clerical error, and had somehow put Edward Sampson’s name and other information on my fingerprint card. Then, when they didn’t show up for their hearing, a warrant was issued for Sampson’s arrest (and for all I knew, he might have committed other crimes since), and now I had been arrested as Edward Sampson. This was the only explanation I could think of, and it sounded plausible to me. I excitedly told the NYPD guys this theory, but they were pretty unimpressed. One of them said that people often come up with crazy stories when they get caught, but this was one of the best he had heard. I told him to look at me. Did I even look like I might be named Edward Sampson? I just kept repeating my theory to them until finally, one of them, Detective John Regan of the Queens Warrant Squad, started to believe me, at least a little. He told his partner, “Look, it sounds crazy, but it might be true. While you guys see the judge (I was being taken to a courthouse in Manhattan where I would be presented to a judge, and we needed to get there before midnight, which was getting close, otherwise I would have to wait in lockup overnight) I’ll go try to find the records from that arrest ten years ago.”

At this point, I begged to be given some food, and again, Regan made the other guy stop at a Chinese restaurant and got me a fried rice (which he paid for) and even put hot sauce on it per my request. He then uncuffed me so I could eat. His partner was not happy at this lenient method of treating a just-captured fugitive, but Regan was by now convinced that I just wasn’t the right type of guy to be a criminal. I shall always be grateful for that meal and Detective Regan’s kindness. At the courthouse, Regan disappeared to look for the old arrest record while I was taken into a courtroom where I was appointed a public defender. Now this guy was a complete idiot. He kept telling me to stop lying and just plead guilty to a reduced charge for which I would just get some community service and no jail time. No matter what I said to him, he would not believe that I was not Edward Sampson. Meanwhile, Regan showed up with a file containing the decade-old arrest records, and luckily it had a picture of Edward Sampson in it. But even then, my supposed lawyer kept saying things like, “That could have been you ten years ago.” Finally the judge herself yelled at him and said, “It is unlikely that your client has changed race since that arrest. And why would he have been arrested for beating himself up?” She told me I was free to go. I was then driven by Detective Regan and his partner back to JFK, where I was released. Asad was waiting for me there.

Detective Regan then offered to help me clear up the problem with the fingerprints, and after some detective work, called me with a strange bit of news: Edward Sampson had committed suicide in 1996 by jumping out of his 5th floor window. So I had essentially been arrested as a dead man for beating myself up. However, the fingerprints were now in many different databases, including the FBI, the state police, INS, and God only knows who else. He suggested I find a lawyer to help me clear this up. So I did. I was assured that the problem had been taken care of, and everything was fine, and indeed, I flew in and out of the country several times without incident. Then, three days ago, I got on a train headed to Montreal for Justin’s wedding. At the border, Canadian customs and immigration officials boarded the train for their inspection of the passengers’ documents. I was asked to step outside where the aggressively hostile questioning began. Finally when they asked why I had two social security numbers, I realized that Edward Sampson’s ghost was back to haunt me. I told them the whole story, and luckily, after much heated discussion and some phone calls, they believed me (because all the details I gave them matched what they had, including the name Edward Sampson, which they had not told me, but which they knew). But they warned me that I may be arrested by American immigration officials on my way back, so I was pretty nervous last night. It didn’t happen. Now I don’t know what to do. If any of you have any ideas, let me know.

Have a good week!

My other recent Monday Musings:
Be the New Kinsey
General Relativity, Very Plainly
Regarding Regret
Three Dreams, Three Athletes
Rocket Man
Francis Crick’s Beautiful Mistake
The Man With Qualities
Special Relativity Turns 100
Vladimir Nabokov, Lepidopterist
Stevinus, Galileo, and Thought Experiments
Cake Theory and Sri Lanka’s President

Selected Minor Works: The Question of Marriage

Justin E. H. Smith

I shall have to write this Selected Minor Works piece in haste, for it is less than 24 hours ago that I was married, for the first and only time, and my bride will not have me throwing away the blissful infancy of our life together crouched at my laptop.

But in order not to be completely neglectful it seems fitting that I hang out here, as it were, a bloodied sheet for the digital age, by offering both a précis of the event, as well as some reflections on marriage, what it could mean and why it is a genuine good, both in general and in this particular case.

It was a glittering and star-studded event, featuring leading figures of the Quebec literary demimonde, a well-known flyer distributor for the now mythical 1980s Sacramento new-wave all-ages dance-club scene, and, not least, the editor of 3 Quarks Daily himself. We danced late into the night at a rented hall of the Musée des beaux-arts, all dressed up, as if we were rich, and there was much talk of how heavenly the salmon canapés were, and how lovely the bride. I was of course much more interested in the latter sort of talk. She is stunning, whereas I am experiencing, looks-wise, some bizarre, swift, and evidently genetically predetermined descent from comparison with Gustav Mahler to nearly being mistaken for Karl Rove. I can only assume grace is involved in her decision, or in God’s plan, or whatever brought this miracle about.

But on to important matters. It goes without saying that, in this particular case, there is love involved (are you reading this, O Immigration Canada?), but this is not what I wish to dwell on. What I want to consider is this: given love, why marriage?

Philosophically, I am not in good company. For Nietzsche, Socrates served as the perfect cautionary tale to any philosopher thinking about the path of marriage. After his shameful example, it is certainly true that, statistically, philosophy and marriage do not tend to occur together. Leibniz, for example, seems to have enquired with a young woman’s father about the possibility of taking her hand, never heard back, and only recalled that this business was outstanding 20 years later, just before his death. His contemporaries, by and large, appear content to have ground lenses, and proved things, and in general to have acted as though there were no women in the world. By the 19th century, marriage comes into its own as a distinctly philosophical issue, with figures such as Kierkegaard taking the problem of marriage as the primary stimulus for productivity. Kierkegaard decided firmly against.

In the 20th century, though, the problem would seem to die out altogether, and marriage to become no more or less problematic for philosophers than for any other segment of the population. Arguably though, this is not because the problem is solved, but only because philosophy is professionalized to the extent that no radical commitments or serious lifestyle measures of any sort are thought to be required. Nobody would dare claim that ‘Two Dogmas of Empiricism’ could have been any richer if not for the sinister influence, from the shadows, of Madame Quine.

But didn’t marriage suffer a crisis in all segments of the population, the first anticipation of which was earlier suffered by the likes of Kierkegaard? Society has been transformed, and marriage displaced as the primary glue that holds it all together. Here in Quebec, we could, after all, certainly get away with not getting married. There is no social pressure to do so at all, and if anything the pressure is in the other direction. Cohabitation impresses immigration officials, anyway, much more than a sudden plunge into official coupledom.

So why the plunge? The simplest reason is this: when I met this woman, I knew that ‘partner’ just wasn’t going to cut it.  What I wished to do with her bore no resemblance to what accountants do when they open an office together.  I wanted to draw on antiquated social forms, to go back before the discovery that the personal is political, that families are tyrannies, and declare that this woman was mine, my wife, ma femme, as though the Enlightenment had never occurred.

Philosophy thought it was liberating its practitioners from nagging Xanthippes, and eventually it made it possible for some to think about liberating everyone from what, seen under the aspect of eternal reason, is indeed an arbitrary bond, and one that can’t but limit one’s freedom. But the lack of good reasons, reasons of the sort accountants come up with every day, is what makes marriage better than accountancy, and what makes the modern blurring of the arrangements of the business world and those of the intimate life such a tragedy. Keep your sound and level-headed arrangements, your rational and limited partnerships. I, as the saying goes, shall take my wife.

Poison in the Ink: Gaia Theory

In the 1960s, a chemist named James Lovelock was invited by NASA to help develop instruments that could detect signs of life on Mars. Lovelock came up with the idea of screening the gases in the Martian atmosphere for signs that their concentrations were being affected in ways consistent with life.

On Earth, for example, the atmosphere is composed of about 78 percent nitrogen, 21 percent oxygen, and trace amounts of other gases, most notably carbon dioxide and methane. These gases are used and absorbed by plants and animals and then remade and recirculated back into the atmosphere. Mars, on the other hand, has an atmosphere that is almost 95 percent carbon dioxide.

The stark contrast between the two planets suggested to Lovelock that Mars couldn’t possibly harbor any type of life, or that if it ever had, that life lay in the very distant past. Chemically, Mars was a dead planet, and to Lovelock, this meant that it had to be biologically dead as well.

If there was life on Mars, it would leave a chemical signature that could be detected from Earth, Lovelock reasoned. The cumulative actions of countless organisms would, over time, change the composition of gases in the atmosphere, and these changes would be visible from space. This kind of thinking led Lovelock to a sudden realization. Recalling it years later, Lovelock wrote:

“I was in a small room on the top floor of a building at the Jet Propulsion Laboratory in Pasadena, California…An awesome thought came to me. The Earth’s atmosphere was an extraordinary and unstable mixture of gases, yet I knew that it was constant in composition over quite long periods of time. Could it be that life on Earth not only made the atmosphere, but also regulated it – keeping it at a constant composition, and at a level favorable for organisms?”

Lovelock discussed his idea with his neighbor, the novelist William Golding, and it was Golding who suggested Lovelock’s new theory be named “Gaia,” after the Greek goddess of the Earth.

When it was first proposed, Gaia theory appealed to environmentalists but was largely dismissed by the scientific community. Critics said the theory was unscientific and teleological, that it proposed some kind of planet-wide consciousness at work.

Other critics, like Richard Dawkins and Ford Doolittle, argued that Gaia theory was at odds with Darwinian evolution. Instead of having organisms simply adapt to their environment, Gaia theory was saying that organisms could actually change it or even control it. In 1982, Dawkins claimed that “there was no way for evolution by natural selection to lead to altruism on a global scale.”

Gaia theory was also at odds with one of Dawkins’ own theories. In 1976, Dawkins published a book entitled “The Selfish Gene” in which he argued that evolution acts not on individual organisms, but on their genes. Organisms were mere vehicles that genes used to replicate themselves. Those genes that helped an organism survive and reproduce also improved their own chances of being passed on and so most of the time successful genes also benefited the organism. For Dawkins, life was a constant war: individuals within a species were competing with one another as well as with the members of other species. What they definitely were not doing, in Dawkins’ view, was working together for the common good of the planet.

In response to his critics, Lovelock teamed up with Andrew Watson and developed a computer model called “Daisyworld.”

Daisyworld was a simulation of an Earth-like planet orbiting a young, Sun-like star. The only form of life on the planet was daisies, of which there were two varieties: black and white. White daisies had flowers that reflected light, and black daisies had flowers that absorbed it. Thus, a planet covered in white daisies was cooler than one covered in black daisies.

In the beginning, when the young star is just starting to warm up, the planet is covered mostly in black daisies. As the planet continues to warm, however, more white flowers begin to bloom. In this way, the planet’s temperature is kept constant despite fluctuations in the star’s temperature.

When it was first introduced in 1983, Daisyworld was roundly criticized by many scientists as being too simplistic. The model did, however, address two important criticisms of Gaia theory. First, it showed that a biologically regulated planet didn’t have to be teleological, that a self-regulating planet could arise without any need for a guiding consciousness. Secondly, it showed that Gaia theory and Darwinian evolution were compatible, that indeed, it was natural selection that made Gaia theory work.
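The feedback at the heart of Daisyworld is simple enough to sketch in a few dozen lines of code. The sketch below uses constants close to those commonly cited from Watson and Lovelock’s 1983 paper (solar flux, heat-transfer coefficient, albedos, growth curve); the fixed-step integration and the one-percent seed floor are simplifications of my own, not part of the original model.

```python
SIGMA = 5.67e-8          # Stefan-Boltzmann constant (W m^-2 K^-4)
S0 = 917.0               # solar flux at the planet (W m^-2)
Q = 2.06e9               # heat-transfer coefficient between patches (K^4)
GAMMA = 0.3              # daisy death rate
ALBEDO = {"bare": 0.5, "white": 0.75, "black": 0.25}

def growth_rate(t_local):
    """Parabolic growth curve: optimal near 22.5 C, zero outside ~5-40 C."""
    beta = 1.0 - 0.003265 * (295.65 - t_local) ** 2
    return max(beta, 0.0)

def equilibrium(luminosity, steps=1000, dt=0.05):
    """Integrate daisy cover to a steady state for a given stellar luminosity.

    Returns (white cover, black cover, planetary temperature in kelvin)."""
    a_w, a_b = 0.01, 0.01                    # seed populations
    for _ in range(steps):
        bare = 1.0 - a_w - a_b
        planet_albedo = (bare * ALBEDO["bare"] + a_w * ALBEDO["white"]
                         + a_b * ALBEDO["black"])
        t_eff4 = S0 * luminosity * (1.0 - planet_albedo) / SIGMA
        # Local temperatures: dark patches run warmer, white patches cooler
        t_w = (Q * (planet_albedo - ALBEDO["white"]) + t_eff4) ** 0.25
        t_b = (Q * (planet_albedo - ALBEDO["black"]) + t_eff4) ** 0.25
        # Growth is proportional to available bare ground; death is constant
        a_w += dt * a_w * (bare * growth_rate(t_w) - GAMMA)
        a_b += dt * a_b * (bare * growth_rate(t_b) - GAMMA)
        a_w, a_b = max(a_w, 0.01), max(a_b, 0.01)   # seeds never vanish entirely
    return a_w, a_b, t_eff4 ** 0.25

# As the star brightens, black daisies give way to white ones,
# and the planet's temperature changes far less than the luminosity does.
for lum in (0.8, 1.0, 1.2):
    w, b, t = equilibrium(lum)
    print(f"L={lum:.1f}  white={w:.2f}  black={b:.2f}  T={t - 273.15:.1f} C")
```

No daisy “intends” anything here: each variety simply outgrows the other wherever its local temperature is closer to the optimum, and the planetary thermostat emerges from that competition.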

Nowadays, there are many different forms of Gaia theory, from “weak” to “strong.” Weak Gaia maintains only that life is important in shaping the Earth. This form of Gaia theory is generally accepted by many scientists today. In contrast, strong Gaia—the form that Lovelock endorses—says that life doesn’t merely influence the physical processes of the planet, but actually controls them.

Lovelock, now 86 years old, is still working to develop Gaia theory. He believes that if Gaia theory were to become widely accepted, it would fundamentally change how humans view themselves and their environment:

“If we are ‘all creatures great and small,’ from bacteria to whales, part of Gaia then we are all of us potentially important to her well being…No longer can we merely regret the passing of one of the great whales, or the blue butterfly, nor even the smallpox virus. When we eliminate one of these from Earth, we may have destroyed a part of ourselves, for we also are a part of Gaia.

“There are as many possibilities for comfort as there are for dismay in contemplating the consequences of our membership in this great commonwealth of living things. It may be that one role we play is as the senses and nervous system for Gaia. Through our eyes she has for the first time seen her very fair face and in our minds become aware of herself. We do indeed belong here. The earth is more than just a home, it’s a living system and we are part of it.”

Monday, October 24, 2005

Poetry and Culture

Australian poet and author Peter Nicholson writes 3QD’s Poetry and Culture column (see other columns here). There is an introduction to his work at peternicholson.com.au and at the NLA.

TO SEEK AND FIND

Poetry and limitations of the ironic mode in the new millennium, Part 2

[The first part of this essay can be found here.]

The composer subsequently explained that he saw the work of Lucifer at work in New York, an entity without love, as the negative force in the struggle to create artwork—but the whole tendency to aestheticise experience, and then theorise that experience, shows how stillborn the expected revolution had turned out to be. With claims that language  had been liberated from the old paternal, sexist past, yet another regime battened down the hatches and enforced its own perverse brand of ideological correctness. It was clear that there was a disconnect between what the language theorists said could be done and what had actually been achieved. Rather than wringing the neck of rhetoric, which the modernists were over-fond of quoting as a devoutly-wished consummation, they had invented a Byzantine rhetorical mode all their own, with its arcane, intangible poetry-speak that simply baffled those who didn’t fall for its nostrums and blithe indifference to the actual act of communication. Having missed out on real revolutionary fervour, soixante-huit and all that, they seemed to think artistic change could come about through substitute barricading of printing presses and metaphorical shouting from digital rooftops. Thus their naive nostalgia for Left Bank subfusc Marxism or Greenwich Village groove as they imagined themselves following on in the tradition of Bakunin, Tel Quel, the Situationists, or whoever.

What developed in Europe at the beginning of the twentieth century as a suspicion of feeling, never mind the antithetical examples near to hand of Tennyson and Keats, was ironised beyond recognition in the States where the experimental became a due process, then a status quo, institutionalised by academe and magazine. Of course, there were exceptions—Crane, Frost. Philosophy provided a convenient  resource for those who were growing wary of their own emotions. When Wittgenstein said that words should be distrusted as agents of truth, poets should have rebelled with every fibre of poetic being, since that is where poets find their truth, such as it is. Swathes of poetry read as if they had been cauterised. Burnt verse offerings were mute testimony to the divided self that wanted at all costs to be seen as Modern, echoing on the shores of publication the surf of their rebarbative white noise.

September 11 was a signifier like no other. We had been alerted before by the bloodbaths in Vietnam, Rwanda, Cambodia, East Timor, the Balkans, the Palestinian struggle, world without end. But it took the destruction of the prime symbol of Western capitalism to give the ultimate wake up call to the West. Was our civilisation worth fighting for? Of course it was, but some couldn’t see the writing on the Western wall. The land of the free and home of the brave resided in everyone who honoured parliamentary democracy. Democracy bestowed on us the virtue of citizenship, the citizenship that gave us not just rights, but obligations too. Just what those obligations should be for the artist, the poet, was a piece of hard thinking not done by those undergoing their compulsory fifteen minutes of fame in the moronic inferno (Bellow’s phrase). The sound and fury could turn out to be an insubstantial pageant faded. The long haul across a lifetime of creative endeavour had to be centred in truthfulness, beauty and fidelity to the Muse, Nietzschean scorn notwithstanding, if it was going to thrive once the iron lung of praise was switched off. Time had a special way of sending down to darkness strident pronunciamentos already fading to uncertainty.

Kant said that Hume aroused him from dogmatic slumber. A large swathe of the poetry world is yet to awake from its own dogmatic slumber, so pleased is it with its own enervating, inward-directed gaze. The rest of the world may very well be passing it by: to those manning the approved poetry portals the view looks good—calm seas and prosperous voyage ahead! Semiotics, deconstruction, bricolage, the age of mechanical reproduction, Lacan, Derrida, the uncertainty principle—these were the constituent elements of the postmodern sensibility which had pirated the good ship Romanticism. The ship sailed on through uncertain, nihilistic seas, but almost everyone was happy to be on board. ‘Now voyager sail thou forth to seek and find’ Whitman had intoned. But that was no good anymore. Was it now just ‘Sail forth’ because there was nothing left to find, motion for the sake of motion, language for the sake of L=A=N=G etc.? However, being a pluralist did not mean accepting the validity of all that was offered, since a great deal of what was offered was mediocre. The receding tides of imperial might still drew in minorities clinging to the coattails of the deceased as a mountain of books and ezines were launched into bulimic oblivion.

A new humanism is certainly needed, but it cannot be the kind that uses the terminology of race winning—art says more than science; science reaches further, and explains more, than art. A poet has a tremendous amount to learn from the scientific and technological revolution. But a poet is not a scientist, a few professionals aside, and it is our job to use language to describe the world, and our feelings, not try to make language the metaphoric adept of quantum mechanics, string theory or fractals, as much as language might reflect aspects of those subjects in its verse technique.

If we were wise, we would envy the relationship of a Nietzsche and a Wagner. Not wearing the mendacious rose-coloured glasses of Heidegger/Arendt, Sartre/de Beauvoir, here was a relationship based on aesthetic venturesomeness and passionate intellectual confrontation that took no hostages to fortune. There was the inevitable falling-out, as is often the way with heroic minds, and mistaken enterprise on both sides, but what gold was deposited in the artistic and philosophical bank vault along the way! How unsatisfactory seem our poetry schools with their frigid theoretical totemism. Groucho Marx: I wouldn’t want to belong to any club that would have me as a member. Australians were a bit too proud of their ability to make a joke of everything and everyone; a savage irony and sarcasm was waiting for anyone who got above themselves. The trouble was, art required getting above yourself since art reached beyond what could not be grasped. Here was a conundrum for the Australian artist. Would they play up to the ideology inherent in convict history and just laugh at the world, or would they willingly choose the ostracism waiting down the pathway of seriousness—laconic Australian speech with its concern for the pragmatic quotidian being somewhat at odds with the demands of memorability and expression of complex feeling. Well, the punishments were duly handed out: Margaret Sutherland lived a life of virtual exile in her homeland; Francis Webb’s Brucknerian confrontation with land and spirit was all but ignored, Joy Hester’s tormented vision waited for the six-feet-under years before people started to look at what she had given*. Here were some of the artists who took the hard choices. They were ironic about themselves, but not about their art. 
Feeling, contradictory and complex, and intelligence, disturbing and antithetical, was evident when artists chose to challenge themselves, not with the technological tools at hand, but with artistic solitude, the rigours of the Muse, the uncordial and unlovely, unfunded and often unacknowledged hard slog sparked by the visionary gleam. An ironic homunculus is always at work in art, and is always a part of any useful artistic enterprise, but when that ironic component has come to dominate the sensibility of an art form, then the capacity of that art form to respond to the complexity of the world is vitiated. What can follow in the wake of terminal irony but sterility and irrelevance.

I remember the first time I saw Lucian Freud’s work, and I didn’t like it at all. Only experience and a greater knowledge of figurative painting brought me to the realisation that here was an admirable artistic seriousness and consummate technical skill. Freud made absolutely no accommodation to the -isms and -ologies of the twentieth century, and paid a price, in the interim, for doing so. To look at Freud seriously is a forbidding experience, a truthful, liberating experience. When I look at his paintings and etchings, I also intuit the presence, and influence, of Rembrandt, Constable and Ingres, and know that I have a great deal to learn from being confronted in this way, lessons I once was not ready to receive. It seems that some artists now believe that they have no need of an inheritance. And some poets think they can make language over again. Their knowledge of history is so slim that they haven’t come to the realisation that it is given to very few writers to remake language in that transformational way. It would be a supreme irony if the future decided that these attempts to revolutionise language had reduced the status of poetry to that of a kind of esoteric language game. Irony as stimulus, in Kierkegaard’s terminology, certainly. There could be no greater exemplar of this kind of irony than Shostakovich who enclosed within a satirical and tragic doubleness an entire emotional and intellectual cosmography, an irony capable of quoting both Rossini and Wagner in its last symphonic confrontation. Sachs tells Walther at the end of Die Meistersinger to honour the masters, the mastersingers—Walther has said he has no need of them. Wagner says it is essential to have a knowledge of the art that has brought us to the present moment. 
This has nothing to do with burying one’s head in the sand—being a classic Stuckist—and everything to do with knowing the good that is not interred with bones, the art that stands as a challenge to everything we achieve and which it is our duty to honour—or what will be left that will be worthwhile? Here should be the joy, not the anxiety, of influence. We cannot invent the world again; this is the same world that brought forth Aeschylus and Euripides, Newton and Einstein, no matter how enticing the technological marvels, with their intimations of transformational experience, now laid out before us. If a poet believes with John Cage that the goal should be to show there is no goal, that is an irony that will be of little interest to future readers. But that is a choice we make, with the language we use, with our poetics and aesthetics.

Poetry embodies the gold nerve of our condition, the impulse that circumnavigates our world, sewing up in vowels and syllables the extraordinarily vulnerable yet tenacious human condition. We may well feel an overwhelming irony confronting the task of producing art in an era that seems, in some respects, to trivialise art and the ideal of art—that has always been the lot of the artist. But in poetry, and I should say the rest of art too, that irony will never be enough, unless one has given up on civilisation. We must move on from the ironic mode, and from the nihilism and scepticism inherent in the ironic mode of discourse, to a more exploratory and expressive aesthetic. The feeling that comes after irony—this is what we must grapple with, as we work our way towards the source that has led us from amphora to cyberspace. To be worthy of that impulse is the duty of the poet.

*Margaret Sutherland, Francis Webb, Joy Hester: respectively, Australian composer, poet and artist.

                                                                    *

                    Word

That flickers,
Heard
After reels
Collect facts
In an air-conditioned room;

Which fills
Time
With gristle of tongue,
Beginnings and endings
Vowelled;

Whose weight
Trembles
By the fold
Of computer digits
And printouts;

This word,
Beautiful, true, free,
We hold.

Written 1990 Published 1997 A Dwelling Place 82

Dispatches: Local Catch

In yesterday’s New York Times Magazine, Paul Greenberg has written this excellent article on the endangered Patagonian toothfish, or Chilean sea bass, to use the current marketing moniker.  He also touches on the very serious situation of all the ocean predators that humans have become so fond of eating: tuna, cod, etc.  As a huge, geeky believer in the idea of being connected to your food by knowing where it comes from and the people who produce it, I thought I’d provide a couple of my recipes for local fish.

But first, where to find local fish?  For a few years, I have been cooking fish on Wednesdays procured from my local fisherman, Alex at Blue Moon fisheries, who (or rather, whose retinue of exceptionally cool female employees) sells his catch on that day at the Union Square Greenmarket (on Union Square West at 16th Street).  What is challenging and interesting about doing it is the lack of any of the familiar farmed fish: no salmon or bluefin tuna, and very rarely cod or swordfish.  Instead you have a seasonally changing selection of fish that are swimming in the waters of Long Island: striped bass, bluefish, albacore, dogfish, weakfish, blackfish, monkfish, sea robin, porgies, etc.  As well, there is a selection of Bluepoint oysters, littleneck and quahog clams, and mussels from Shinnecock Bay.  Buying oysters and bringing them home to shuck yourself and slurp down with a little Sancerre, beer, or Champagne is something I highly recommend to endear yourself to your loved ones.

Getting fish from Alex and company and learning what to do with it has taught me a ton about what actually swims in the waters surrounding New York, at what times of year, and at what depths.  It’s also brought me into contact with lots of interesting ideas and local colorations.  Dan Barber sears white fish in lard; it’s super.  When you start doing something regularly, ideas start to flow.  Fried oysters are quick and delicious.  Albacore tuna crusted in sesame and seared on one side, then doused in soy and mirin is amazing.  I make my own canned tuna by packing chunks of the albacore into mason jars with chili, garlic, parsley, and lemon rind, filling them with olive oil, and following the typical canning procedure.  A salade nicoise with the oil from those jars in its mustard vinaigrette is pretty special.  Dogfish (a small shark) sandwiches with tomatoes, mayonnaise, and Tabasco are a hard lunch to beat.  Fish biryani with dogfish.  Grilled bluefish.  And on and on.  Find a local fish supplier, read some Alan Davidson, and away you go.  Anyway, herewith, a couple favorites.

CLAM CHOWDER

Jasper White (the godfather of chowder) might dislike this recipe, since it doesn’t contain salt pork, one of the orthodox elements in the litany of New England chowder ingredients.  Well, I developed this recipe while living with a pescetarian, so I didn’t use salt pork.  If you want to use it (and it is great), leave out the garlic and chili mixture and start by rendering the fat from about two tablespoons of finely diced salt pork, removing the pieces once they have given up their fat and browned golden.  Drain these on a paper towel and sprinkle the crispy cracklings on the finished dish.  You’ll have a more traditional chowder that relies on the base note of pork fat (always a pretty good idea), though my version substitutes for that flavor pretty well, I believe. 

Three other points: you want potatoes that are starchy, not waxy, because the starch helps thicken the broth. Second, sourdough bread and seafood broth is a godlike combination, so make sure you find a good sourdough loaf.  Finally, my other innovation on the classic method is to remove the clams after cooking and replace them at the end, thus avoiding rubbery-clam syndrome.  If you only have very large clams, you can chop them up roughly prior to putting them back in the soup, though you then lose the sensual quality of perfectly tender, whole clams in your soup.

Serves 4-6

1/2 tsp minced garlic and red chilies in olive oil (if you don’t have it, make do with crushed chili flakes and minced garlic)
1/2 tsp peppercorns, lightly crushed in mortar
1 knob butter
2 tbl olive oil
1 large yellow onion, cut into large dice
two dozen littleneck clams, scrubbed and rinsed
three large starchy potatoes (such as Idaho), peeled and cut into largish slices
1 lb flounder fillets (or other delicate white fish: fluke, halibut, sea bass, hake, cod, etc)
half a bunch of flat-leaf parsley, chopped
half a loaf sourdough bread, sliced and toasted
cream (as much or as little as you like)

Warm the garlic/chili and the peppercorns with the butter and oil in a heavy soup-pot over medium-low heat.  Add the onions and sweat until translucent and flimsy but not browned.

Add the clams and a bit of water (or wine), turn the heat to high and cover.
Open after five minutes and check that all the clams have opened (littlenecks can be a bit reticent, sometimes you have to turn them right side up).  With a slotted spoon, remove the open clams to a bowl.

Now add the potatoes to the pot and enough water to cover them.  Salt generously to avoid stirring too much later.  (Meanwhile take the clams out and discard the shells.)  Boil until potatoes are well cooked.  Crush one or two of the pieces into the broth to thicken it a bit.

Add the flounder fillets and cook them in the broth just until they flake apart (2-3 minutes).  Turn the heat off, throw in the clams and any residual broth from their bowl, the parsley, and enough cream to thicken the broth.  Ladle into bowls, and top with sourdough croutons.

BURNT CHOWDER

This recipe is a good example of how small changes in execution can result in completely different flavors.  Here this is achieved by browning the garlic and onion further, which makes for a more potent, deeper broth.  This is then balanced by the addition of crushed red chili flakes and saffron, and smoked fish.

To make it, follow the recipe for Clam Chowder, but substitute two cloves of minced garlic and half a teaspoon of red chili flakes.  Let the garlic brown to a nutty gold (but not actually burnt) before adding the onion, which you should also let become just golden.  Put in less salt.  In place of half the flounder, add half a pound of smoked fish (haddock would be ideal, but bluefish works too).  Lastly, soak some saffron in cream while making the soup, and finish with the saffron-inflected cream.

FISH STEW

This is my version of Bouillabaisse, with some help from the fish stew of Liguria.  Following the French system of bestowing appellations, I do not call my stew bouillabaisse, because the particular fishes required to make a proper one are unavailable to me in New York (particularly the legendarily bony rascasse, without which no authentic bouillabaisse can be made – one of the best pieces of food writing I have ever read is the great A.J. Liebling’s article on the rascasse).  Instead I have developed a retinue of seafood found here that together produce a stock of similar complexity.  What’s important is to have a bony bottom-feeding fish similar to the rascasse – in my case I use the cool and ugly sea robin, a “trash” fish I buy from Alex for $1.50 a pound.  I fillet the fishes myself, but you can also have the fishmonger do it and ask to keep the carcasses.  The leftover meat from the carcasses, once cooked, can be saved for fishcakes.

This is a good dish for a special occasion, when you have friends helping and drinking Bandol or Julienas with you in the kitchen.  I think a seafood stew and its delicious broth are celebratory in a way unlike a big roast of meat – certainly much more exciting.  For me, a bouillabaisse or other fish stew is an epic poem of the region in which it’s made, a Virgilian georgic of people, fish and work.

Serves 8

2 yellow onions, coarsely chopped
a large bulb fennel, chopped, eight leafy fronds reserved
6-8 ribs celery, chopped
six cloves garlic, minced
10 peppercorns
3 bay leaves
a handful of flat-leaf parsley, chopped
one snapper, filleted, carcass conserved
one sea robin (from the gurnard family – or the boniest sea fish you can find), whole
one bass, filleted, carcass conserved
1 1/2 pound mussels
1 1/2 pound very small clams (vongole) or cockles
half a bottle Italian white wine (nothing fancy, Pinot Grigio or Orvieto will do)
8 langoustines, or 1 pound of the biggest, coolest shrimp you can afford
3 pounds medium-waxy potatoes, peeled and cubed, or new potatoes
2-3 pounds fillets of very fresh wild striped bass, wild halibut, wild cod or other large-flaked white fish
3 tbl olive oil
red chili flakes
2 cans whole tomatoes, preferably Italian such as San Marzano
1 tsp saffron, preferably the large Iranian available from Truffette
1 loaf best sourdough or country bread (pugliese, batard, etc)
2 cloves garlic, halved

In a large stockpot, add a third of the chopped onion, chopped celery, chopped fennel, and chopped garlic, and all the peppercorns, bay leaves, and fish carcasses.  No salt.  Cover with cold water.  Bring to the boil, break up the carcasses with a wooden spoon.  Simmer slowly for thirty minutes.

During this thirty minutes, in another soup-pot, add the mussels and 1 cup wine, cover and cook over high heat until steam escapes from the top.  Check that all the mussels are open.  With a slotted spoon, move the mussels to a bowl. Remove half the mussels from their shells, discarding shells, and put the mussels on a large platter.  Strain the liquid in the pot into the stockpot.  Repeat the exact same process with the cockles or clams, adding to the platter.

Also during this thirty minutes, put the shrimp or langoustines into the stock pot in a sieve, so they don’t float away, and cook until just done or slightly underdone, less than a minute.  Reserve.  Also boil the potatoes in salted water until just done, drain, and add to the platter.  Finely chop the parsley and add to the platter.

Back to the stock.  After thirty minutes elapses, strain the stock into another pot through a sieve.  Remove all solids from the stock pot (after the fish carcasses cool, pick through them for meat, and use it for fish cakes, a mayonnaise-based fish salad, or just a sandwich with tabasco).  Now strain the stock back into the stockpot, put back over low heat and let it simmer, concentrating its flavor.  It should smell pretty great.

Now the actual stew begins.  In a heavy soup-pot, heat the olive oil over medium and add the rest of the onion and fennel, cooking until translucent.  Add the garlic and a hefty amount of red chili flakes, and fry until garlic is just off-white and the harshness of its aroma has been attenuated.  Throw in the saffron and stir.  Now add the tomatoes (but not their juices), breaking them up with a spoon.  Cook this down over high heat until you have a consistency a little looser than tomato sauce, about 30 minutes.  Now add enough of the concentrated fish stock to make the right volume for your numbers, and season with salt and pepper exactly right.  Poach the snapper and bass fillets in the stew and break them up a bit.

Thickly slice and properly toast the bread, then quickly rub one side with halved garlic.

Finally, slice the striped bass fillet into portions, one for each eater.  Now poach the fish pieces in the simmering stew broth.  As soon as they’re close to done, remove each to a warmed soup plate.  Now add the entire contents of the platter (mussels, cockles, potatoes, parsley) to the stew and heat as fast as you can.  When back to the boil, add langoustines or shrimp, count to fifteen, and ladle the stew over the fish, placing a langoustine on top, and add half a ladle of the stock to moisten if necessary.  Prettily place a fennel frond on top of each bowl (lying down, not jutting out!) and wedge a slice of garlic-rubbed toast halfway into the soup on the side of each plate.

HOME-CANNED ALBACORE TUNA IN OLIVE OIL

This is a great use for albacore tuna, which is good raw but too lean to be very good cooked, and which appears in great quantities in the late summer and fall.  You can use it for the Salade Nicoise or for my tuna salad recipe below.  It’s way better than regular canned tuna and way cheaper than imported oil-packed Italian bluefin tuna.  Anyway, bluefin is wasted on anything other than sashimi.

Makes one Canning Jar

1/2-3/4 pound albacore tuna, no bones or skin, cut into 4-5 chunks
1 clove garlic, smashed and chopped
two or three red birds-eye chilies, split lengthwise
some sprigs of parsley
a small pinch of thyme
1 tsp Maldon salt, or regular salt
6 peppercorns
good olive oil

Sit the tuna chunks in a canning jar (with the hinged top and red rubber seal) on top of the parsley fronds.  Add the rest of the ingredients around and on top of the tuna.  Now pour olive oil on top until it comes above the level of the tuna.  There should be a little room at the sides and top of the jar; you don’t want it packed too tightly.

Now place the jar (or jars) in a large pot and fill with water until the water comes just below the level of the jar’s mouth. Take the jar out and bring the water to the boil; return the jar (with tongs), lower the heat to the barest simmer, cover and let simmer for 1 hour. You’re done. Let cool and refrigerate (just to be safe); it keeps indefinitely, though once you open the jar you should finish it within a few days. And you will.

SALADE NICOISE

Obviously a classic.

Serves 4.

1/2 pound oil-packed tuna, flaked
2 pounds new potatoes
1 pound asparagus
4 eggs
2 very ripe beefsteak or plum tomatoes
Mixed lettuces, cos, romaine, etc
1 tbl mustard (I use Maille)
1 lemon
Maldon salt
pepper

Boil a pot of well-salted water, add the potatoes, cook until done.  Set aside.  Boil the eggs until softish, set aside, cool, peel and halve.  Blanch the asparagus, set aside.

Cut the tomatoes into quarters lengthwise and sprinkle with Maldon salt.

Make a vinaigrette by beating into the mustard some olive oil, some oil from the tuna can and the juice of half a lemon until you have a loose dressing. Pepper it.

Mix all ingredients gently with dressing to coat.  Either in a large bowl for the table or in individual dishes, arrange all ingredients and squeeze some lemon, grind some pepper and crunch some Maldon salt over them.  Bob’s your uncle.

TUNA SALAD

This is the best tuna salad you can get, perfect for open-faced sandwiches on thick, toasted slices of good bread.  If you don’t have your own canned tuna, use water-packed white meat tuna and add a little more mayo.  I lunch on it and a bowl of dal when writing.

Serves 1 man, or 2 women (inside joke)

1/2 pound home-canned tuna
2-4 birds-eye chilies, finely sliced
thumb of ginger, finely diced
1 tbl parsley or cilantro, finely chopped
1 tbl mayonnaise
a squeeze of lemon juice
a lot of black pepper

Combine everything in a bowl.  Break up the tuna some, but not too much.

FISH BIRIYANI

SERVES 6-8 (depending on Allah’s mood – at times it has served 12)

6 curry leaves
2 onions, halved and thinly sliced
4-6 cloves garlic, minced
1 thumb-sized piece ginger, finely diced
3-5 green birds-eye chilies, thinly sliced

1/2 tsp turmeric, 1 tsp ground coriander, 1 tsp red pepper, 1 tsp ground ginger, 1/2 tsp garam masala OR
4 tsp Shan fish biriyani spice mix OR
4 tsp hot madras curry powder

oil (canola or other vegetable)
salt
1/2 tsp cumin seeds
10 black peppercorns
handful fresh coriander, chopped
some saffron and a bit of milk
16 oz. plain yogurt
2 lbs of firm, white-fleshed fish fillets (I like tilapia or the shark known as dogfish)
2 teacups full of Basmati rice from Pakistan, or India

Soak the saffron threads in milk in a cup.  Fry 1/4 of the onion in oil until dark golden brown (not burnt), then spread on a paper-towel covered plate to dry and become crispy and sweet.  Bring a pot of salted water to the boil, add the rice, simmer for 5 minutes until half cooked through, then drain in a sieve and leave.

Heat 5 tbl oil in a large pan and add the curry leaves, cumin seeds and peppercorns when hot, then, 30 seconds later, the other 3/4 of the onion. Fry till the onion is light golden and getting dark at the edges. Add the garlic, ginger, and chilies and fry a couple minutes more. Add the ground spice mix, 1 1/2 tsp salt and the coriander and fry for a minute. Add the yogurt a little at a time, stirring, then in larger amounts until it is incorporated and all is bubbling away. Then reduce the heat, add the fish fillets, cook a couple of minutes and turn them over. Turn off the heat. You now have a half-cooked fish curry and, separately, some half-cooked rice.

In a sturdy pot or casserole, spoon some of the sauce from the curry into the bottom, then add 1/3 of the rice, then half the fish curry, then another 1/3 of the rice, then the other half of the fish curry, then top with the last 1/3 of the rice. On top, sprinkle most of the crispy onions and a bit of fresh minced ginger, then pour the saffron-milk on top in an X pattern. Put the lid on and leave to cook at the lowest heat for 30-45 minutes, turning off the heat 10 minutes before eating. Serve with raita, a hot green chutney, and extra crispy onions on each serving.


Lives of the Cannibals: Isolation

The word derives from the Latin insulatus–made into an island–and it has a nasty sense to it, or so goes conventional thinking. Among its associations: disease, betrayal, failure, separation. It is the fate of the disgraced ruler (Napoleon’s sentence, true to the word’s root), the madman (isolated even from his own limbs by the fastening of straps) and the infected (the soon-to-be-dead, obscured by thick sheets of plastic and extensive breathing apparatus). It is what mothers fear for their children (who must be socially integrated, who must play well with others in order to get along and ahead in the world) and what children fear for their doddering parents (who must be reminded that they still belong to this world). It is a word without much positive association, at least in the minds of most people. We are taught to value plurality, consensus and feedback, and to regard the defiantly singular as suspect.

Such a shame, these negative connotations, especially considering that the word itself is quite properly defined and sourced. But we do understand its associations well–disgrace, insanity, imminent death–and we New Yorkers embrace them. We are (colorfully, proudly) isolation’s wealthy priests, a brotherhood of rejected, contagious madmen–and don’t you shake your head, Cowboy, in your disingenuous shame….I know pride when I see it!–clambering together on several rocks at the edge of the Atlantic. Of course, difference has always been a source of pride, a desirable feature in moderation, something to distinguish (but not to separate). You don’t need a Metro Card to appreciate what’s unique. But it helps. And in fact New York was built for isolation–exquisite machine, and complex, designed to exacerbate difference by density.

Isolation is subjective. There is no observable measurement that guides our estimation of it (its trite signals–social ineptitude, substance abuse, pallor–are too broad, suggesting a host of primary mental and physical disorders to which isolation has been unjustly attached as symptom or result), and yet it is experienced always and only in relation to others. Perhaps that’s why New York is its perfect vehicle. You cannot be isolated from others if there are no others to be isolated from, and fortunately, in this city, there are many, many others (all of them occupying, it seems on some nights, the apartment directly above yours). This is New York’s genius: to pack and load until all around are the bodies and voices of other people, most of whom you will never meet, whose thoughts may or may not coincide with your own, and whose gestures and posture and vocal tone may remind you in some insignificant way of someone you once knew, enough at least to confuse for a moment, to part your lips with the beginning of recognition. The multitude is New York’s special power. Here you will walk the streets and see the face of your best friend, how it was contorted with laughter, and the hands of the man who taught you piano, whose knuckles were enormous; you’ll hear your uncle’s voice, the way it thins its vowels down to string. These recognitions keep you dizzy in the beginning. Then they make you wary and wise. This is how you earn your eyes in New York, the ones that look right past beggars and roll in the wide-open faces of tourists. Things are not as they appear.

Concrete, too, plays a role. The hard surface, a broad palette, does not lend itself to the formation of meaningful human connections. With appropriate irony, we live and work on top of this manmade carapace, choosing to expose rather than protect ourselves, favoring the benefit of an impenetrable surface on which to construct our ambition. It’s better that way–reliable, safe, efficient–and if we imitate its principal characteristic, if we are a touch impervious, then such is the sacrifice we make. We are not here to join hands in fellow feeling.

And there is the anonymity of sophistication, because who would champion fraternité in the thick of such wit and fancy poise? New York City, weary from its better knowledge, is no place to clasp hands and sing songs. Isolation is inherently sophisticated, an exclusive state, and highly transmissible, so it flourishes here, without the annoyance of a lot of mutual identity. When New Yorkers run into each other outside the city, there is acknowledgement, yes, and respect, and even some sense of pleasure at the recognition, but we do not then go out to dinner together. We don’t become friends, no more than we would were we to bump into each other on Seventh Avenue. Such things are for people from Wisconsin. No, sophistication demands restraint, and the city trains us well in that discipline.

It is almost ridiculous to add that the city’s architectural realities reinforce our sense of isolation, so obvious does it seem. The five boroughs offer a wide selection of slots in which we may exist calmly, in compact stacks of residential habitats. We transform warehouses and churches and single-family brownstones into hives of homes, with drywall and wainscoting and original details, and we sit in our rooms and listen to our neighbors, who themselves are listening to their neighbors, who just returned from Elizabeth, New Jersey, with new throw rugs for the kids’ room and a drop-leaf dining table. We covet these small comforts, the better to insulate our tiny segment of space, the better to fashion attractive surroundings, to distract from the stranger who sleeps just inches away, just through that wall, whose obstructed breathing you can hear in the middle of the night. The fabulous terror of isolation is felt best when pressed up against the bodies of millions.

Whatever its ingredients and the means of its formation, New York’s modus operandi and principal issue fuels ambition–professional, creative, romantic. In every moment of individual desperation lies the seed of an artistic triumph, an industrial revolution, an unholy feat of seduction. It is New York’s most appealing paradox–that the greatest of cities maintains its power not by bringing its people together, but by inspiring their isolation.


Monday, October 17, 2005

Monday Musing: Leonard Cohen

Skagen

I heard Leonard Cohen’s music for the first time driving from Skagen down to Copenhagen some years ago. Skagen is at the top of the world, at least as far as mainland Europe is concerned. We walked along the beach on a grayish day. You can walk until the sand tapers into a point and then vanishes beneath the North Sea.


A group of painters became fascinated with the light and the landscapes there during the mid-nineteenth century. They became known as the Skagen painters. It’s true that the light is special there. It’s diffuse and it’s sharp at the same time, which doesn’t really make any sense but I guess that’s why it is special.

Driving back south again through marshes, dry marshes, you pass miles and miles of little trees. My Danish friend told me that the trees are all so small because of the make-up of the soil and all the sand. I have no idea if that’s true or even if my memory is entirely accurate about the trees or the soil. But I have an image of vast stretches of the tiny bare branches of thousands of little trees filtering the strange dying light late in the evening.

We watched the scene from the car window and listened to an album of collected Leonard Cohen songs. We listened to Chelsea Hotel #2:

I remember you well in the Chelsea Hotel,
you were talking so brave and so sweet,
giving me head on the unmade bed,
while the limousines wait in the street.
Those were the reasons and that was New York,
we were running for the money and the flesh.
And that was called love for the workers in song
probably still is for those of them left.

And I remember that we listened to Hallelujah in several different versions.

Now I’ve heard there was a secret chord
That David played, and it pleased the Lord
But you don’t really care for music, do you?
It goes like this
The fourth, the fifth
The minor fall, the major lift
The baffled king composing Hallelujah

Hallelujah
Hallelujah
Hallelujah
Hallelujah

Anyway, with that weird northern light up at the end of the world and those forests of tiny little trees and the mournful throaty singing and those Cohen lyrics that always manage to surprise you with some funny-tragic turn of image it was quite a ride. Not something you’d want to do every day but affecting, memorable.

Well, he has a new album coming out in a week and a half called Dear Heather. I don’t know what to expect, really. A lot of his more recent music found him using some rather strange arrangements, cheesy synthesizers and drum machines. I’d say it was all an elaborate con or something but you never can tell with Leonard. He spent the late ’90s at the Zen Center of Mount Baldy, where he’d become a monk and taken to calling himself Jikan.

He started making drawings and writing poetry like this:

Seisen has a long body. Her shaved head threatens the skylight and her feet go down into the vegetable cellar. When she dances for us at one of our infrequent celebrations, the dining hall, with its cargo of weightless monks and nuns, bounces around her hips like a hula-hoop.

This from a man who was said by one journalist to have penned “the most revolting book ever written in Canada.” So, you never know with Leonard. Of course, he seems to have spent much of his time at the Zen retreat drinking, smoking cigarettes, and taking sex breaks. He calls himself a “bad monk, a sloppy monk.” It also turns out that while he was practicing Buddhism up on the mountain, his lawyers and accountants stole all his money. When he found out that he’d been fleeced of all but a fraction of his cash, he said, “You know, God gave me a strong inner core, so I wasn’t shattered. But I was deeply concerned.” For better or worse, he’s been stimulated to write a lot of music again and go back on tour.

The lyrics for the song Dear Heather seem promising, though; they sound like the Cohen of old, just having gotten a lot older:

Dear Heather
Please walk by me again
With a drink in your hand
And your legs all white
From the winter

His best music clearly came from a time when he was a miserable wreck. But it will be interesting to hear what comes out of a Leonard Cohen who’s reached 70. Recently, he said of his old age:

There was just a certain sweetness to daily life that began asserting itself. I remember sitting in the corner of my kitchen, which has a window overlooking the street. I saw the sunlight that shines on the chrome fenders of the cars, and thought, “Gee, that’s pretty.”

Fair enough, Jikan. I’ll buy the album.

Monday, October 10, 2005

Critical Digressions: Literary Pugilists, Underground Men

Ladies and gentlemen, boys and girls,

After being attacked for a number of years by a new generation of literary critics – indeed, sucker-punched, phantom-punched, even body-slammed – “contemporary” (or “postmodern”) prose has hit back: in this month’s Harper’s, one of our favorite publications (less than $10.99 for an annual subscription), one Ben Marcus has donned his fighting gloves – which seem a little big for his hands, his pasty, bony frame – climbed into the ring, earnestly, knock-kneed, sweating from the hot lights, the camera flashes, the hoarse roar of the audience, the sense of anticipation, broken noses, blood…

Like Dostoevsky’s Underground Man, Ben announces, “I am writing this essay from…a hole…” He continues:

“…it’s my view that…the elitists are not supposedly demanding writers such as myself but rather those who caution the culture away from literary developments, who insist that the narrative achievements of the past be ossified, lacquered, and rehearsed by younger generations. In this climate…writers are encouraged to behave like cover bands, embellishing the oldies, maybe, while ensuring that buried in the song is an old familiar melody to make us smile in recognition, so that we may read more from memory than by active attention.”

Fighting words, ladies and gentlemen! We’d like to tell you that Ben fought a good fight; that he came out swinging; that he staged an upset; that an underdog took on the emerging consensus on contemporary prose, shaped by the likes of James Wood, Dale Peck and B.R. Meyers, and according to Ben, Jonathan Franzen, Tom Wolfe and Jonathan Yardley. But criticism is no fairy-tale world, and Ben is no hero. A welterweight in a heavyweight fight, he doesn’t have enough behind his punch.

The ambitiously titled Why Experimental Fiction Threatens to Destroy Publishing, Jonathan Franzen, and Life As We Know It begins with a peculiar digression on the anatomy of the brain, including a quick explanation of Heschl’s gyri, Broca’s area and Wernicke’s area (“think of Wernicke’s area as the reader’s muscle”), which may be novel, experimental, but has no business in a literary critique. Had Ben fused literary theory with neuroscience in a more serious, symbiotic, technically rigorous way, he might have achieved something. But just as Ben gets us thinking about the neural implications of literature, he gets wishy-washy, namby-pamby: “If we [writers] are successful, we touch or break readers’ hearts. But the heart cannot be trained to understand language…”

This introduction might have been overlooked had Ben knocked the reigning heavyweight champions down by the second or third round. But he doesn’t. He quarrels with the prevailing neo-realist sensibilities of critics – that is, “the notion that reality can be represented only through a certain kind of narrative attention” – and with those who argue against “literature as an art form, against the entire concept of artistic ambition.” He then has beef with Franzen: “Even while popular writing has quietly glided into the realm of the culturally elite, doling out its severe judgment of fiction that has not sold well, we have entered a time when book sales and artistic merit can be neatly equated without much of a fuss. Franzen has argued that complex writing, as practiced by…Joyce…Beckett and their descendents, is being forced upon readers by powerful cultural institutions…and that this less approachable literature…is doing serious damage to the commercial prospects for the literature industry.” Fair enough but not hard enough.

But though we want Ben to win this fight, because we champion underdogs and such contrarian projects on principle, Ben is quite unable to summon the fierce intelligence and evangelical zeal of, say, James Wood or the flamboyance and shock value of Dale Peck. He may pretend to be the Underground Man but he’s not “a sick man…a spiteful man…” In 2001, however, B.R. Meyers, more non-entity than underdog, managed the sort of upset Ben aspires to. Writing in the Atlantic, his thorough, articulate attack began:

“Nothing gives me the feeling of having been born several decades too late quite like the modern ‘literary’ best seller. Give me a time-tested masterpiece or what critics patronizingly call a fun read – Sister Carrie or just plain Carrie. Give me anything, in fact, as long as it doesn’t have a recent prize jury’s seal of approval on the front and a clutch of raves on the back. In the bookstore I’ll sometimes sample what all the fuss is about, but one glance at the affected prose – “furious dabs of tulips stuttering,” say, or “in the dark before the day yet was” – and I’m hightailing it to the friendly spines of the Penguin Classics.”

A Reader’s Manifesto: An Attack on the Growing Pretentiousness in American Literary Prose caused a commotion as the Wall Street Journal, New Yorker, Harper’s, NYT, Washington Post and New York Review of Books joined the fray, a real battle royale. And Meyers came out on top: presently, he’s a senior editor at the Atlantic. When you hit hard, it doesn’t really matter what you say. So what’s Meyers’s beef? “What we are getting today is a remarkably crude form of affectation: a prose so repetitive, so elementary in syntax, and so numbing in its overuse of wordplay that it often demands less concentration than the average ‘genre’ novel.” And what is his methodology? He proceeds to categorize contemporary prose in five broad groups – “evocative,” “muscular,” “edgy,” “spare” and “generic ‘literary’ prose” – citing weak passages from the writers whom he finds representative of each group: Proulx (The Shipping News), McCarthy (All the Pretty Horses), DeLillo (White Noise), Auster (City of Glass) and Guterson (Snow Falling on Cedars). Manifestly, Meyers packs a formidable punch.

Of course, even back in 2001, Meyers may have been a non-entity but he was no underdog. Literary fashion had been changing well before him, with Wood in the pages of the Guardian and subsequently in the New Republic, where Wood was joined by Dale “The Hatchet Man” Peck. Peck, you may remember, famously proclaimed, “I will say it once and for all, straight out: it all went wrong with James Joyce…Ulysses is nothing but a hoax upon literature.” Like Tyson, Peck writes, “Sometimes even I am overwhelmed by the extent of the revaluation I’m calling for, the sheer f***ing presumptuousness of it.” In one critique, in one sentence in fact, Peck excises “most of Joyce, half of Faulkner and Nabokov, nearly all of Gaddis, Pynchon and DeLillo, not to mention the contemporary heirs.” This assertion makes for interesting if idle exercises: we mull, for example, which half of The Sound and the Fury Peck would excise if given the opportunity – the first two books, of course, Benjy’s and Quentin’s – and what effect his reductive, retrograde editing would have on the novel as a whole. Peck, like Mike Tyson before him, bites ears off, and often punches below the belt, smack in the crotch. Tyson once said, “I wish that you guys had children so I could kick them in the f***ing head or stomp on their testicles so you could feel my pain because that’s the pain I have waking up every day.”

The New York Review of Books noted that “Like his colleague at the New Republic, the estimable and excellent James Wood, Peck seems to want more novels like the great [19th] century social novels: serious, impassioned, fat.” Were we to step into the ring, brandishing our shiny brown muscles, we would simply but forcefully argue that the world, that civilization, and literature with it, has moved a hundred years forward since the 19th century. Looking fondly back towards realism is quite literally retrograde, like those other Underground Men, Wahhabi Islamists urging Muslims to return to the 7th century. The novel, like these critics and the critical canon (that includes the Russian Formalists, the New Critics, the Structuralists, the Post-Structuralists, whatever), is grounded in a certain context. It is a palimpsest, distilling and processing the anxieties, sensibilities, diction, colloquialisms, news and popular culture of a particular time and place and people.

Dreiser and Dos Passos, for example – two different writers, the former considered traditional, the latter experimental – were unable to write novels that are relevant today except as history, as part of the evolution of the modern novel. On the other hand, and off the top of our head, we just finished Roth’s Goodbye, Columbus – his first book – which features a Jewish protagonist, a class divide, a sectarian divide, and specific references and allusions to the fifties in America – including, incidentally, the title itself – and were charmed by the sweet, straightforward adolescent love story (and the voice). Unlike Manhattan Transfer, Goodbye, Columbus remains relevant. Some novels transcend their cultural and temporal trappings.

We dig Roth for different reasons than, say, Melville, Dostoevsky, Dickens. We dig 20th century writers for different reasons than their antecedents: the lyrical and frenetic Marquez and Rushdie, the postcolonial and serious Naipaul and Coetzee, the very contemporary Franzen and Wallace.

Sure, from Dostoevsky to Wallace, the conventions of storytelling have changed and prose has become more self-conscious, but don’t let the Underground Men lecture you that change is good or bad; change is. And we’ll tell you this much: anybody advocating cutting Nabokov down to size should be paraded naked in the ring, weak chest, hairy buttocks, spindly legs exposed, wearing his own novel as a fig leaf. Sure, some contemporary prose has become gimmicky, adjective-laden, rife with metaphor (which, in a way, is arguably Nabokov’s legacy); and sure, silly alliteration needs to be caught, condemned. Meyers will rightly beat you up for it. That’s his job, and Wood’s and Peck’s. Ben nobly got into the ring but he needs to train harder if he’s going to go twelve rounds with them. Somebody, however, needs to hit back, to keep it real.

As Eddie “Scrap-Iron” Dupris once said (somewhat heavy-handedly), “Boxing is an unnatural act…everything in boxing is backwards: sometimes the best way to deliver a punch is to step back…But step too far and you ain’t fighting at all.” We’re not entirely sure if this is relevant but it sure sounds good.


Selected Minor Works: Early Modern Primitives

Justin E. H. Smith

I have recently come across a delightfully obscure 1658 treatise by the very pious John Bulwer, entitled Anthropometamorphosis: or, the Artificial Changling. This may very well be the first study in Western history of piercing, tattooing, scarification, and other forms of bodily modification. It is thus a distant ancestor of such contemporary classics as the 1989 RE/Search volume, Modern Primitives.

But if the Voice Literary Supplement once praised RE/Search for its dispassionateness, today a hallmark of respectable ethnography, Bulwer’s science is at once a moral crusade. In each chapter, Bulwer bemoans a different deplorable practice, including “Nationall monstrosities appearing in the Neck,” “Strange inventive contradictions against Nature, practically maintained by diverse Nations, in the ordering of their Privie parts,” and (my favorite) “Pap-Fashions.”

If Bulwer hates nipple rings and dick bars, he is no less concerned about the now rather innocent habit of shaving. He rails in one chapter against “Beard haters, or the opinion and practice of diverse Nations, concerning the naturall ensigne of Manhood, appearing about the mouth.” For him any bodily modification is but a “Cruell and fantasticall invention of men, practised… in a supposed way of bravery… to alter and deforme the Humane Fabrique.”

Bulwer believes that morally degenerate practices can over time lead to actual physical degeneration within a human population. Thus, for him, phenotypic variation in the species is a consequence of cultural bad habits, rather than teleologically driven progress from lower to higher forms, let alone adaptation by way of natural selection. The ugliness of non-Europeans may be attributed to the rottenness of their souls and consequent savage lifestyles. Indian pinheads and Chinese blockheads, whose skulls are sculpted from birth by malevolent adults, are cited as cases of degeneration in action.

200 years before Darwin, then, there was widespread acceptance of the idea that species could change over time. But for moralists such as Bulwer, change could only ever be change for the worse. In this connection, Bulwer denounces the view of a (regrettably unnamed) libertine philosopher that human beings evolved from other primates: “[I]n discourse,” he writes, “I have heard to fall, somewhat in earnest, from the mouth of a Philosopher that man was a meer Artificial creature, and was at first but a kind of Ape or Baboon, who through his industry by degrees in time had improved his Figure & his Reason up to the perfection of man.”

Bulwer believes that the ‘Philosopher’s’ opinion constitutes a symptom of the moral decline of the modern period. For, he thinks, if mutation of humanity over time can occur, it will not, as the Philosopher thinks, take the character of an ascent from beast to man, but rather the reverse, a descent into ape-likeness: “But by this new History of abused Nature it will appeare a sad truth, that mans indeavours have run so farr from raising himselfe above the pitch of his Originall endowments, that he is much fallen below himselfe; and in many parts of the world is practically degenerated into the similitude of a Beast.”

Evolutionary thinking, then, opens up the possibility not just of progress out of animality, but of degeneration into it, and this was a possibility that the pious, such as Bulwer, were beginning to fear.

If we move forward a few hundred years, we find that the human species still has technology that beats the reed dipped into the anthole, and that we still exercise our freedom to mate outside of estrus. Indeed, not much of anything has changed since the 17th century, either through degeneration or evolutionary progress. One thing that has remained entirely the same is the art of moralistic ranting: we find that, now as then, precisely those who are most concerned about the moral stain of body piercing and tattoos, who are most active in the movement to make visible thongs in suburban Virginia malls a misdemeanor, are the same people who would have us believe that humans were instantaneously and supernaturally created with no kinship relation to other animal species.

It is worth reflecting on why these two crusades, which prima facie have nothing in common, have proven such a durable pair throughout the centuries. I suspect that human thought is constrained (as a result of the way our minds evolved) to move dialectically between two opposite conceptions of animal kinds: that of the book of Genesis on the one hand, positing eternally fixed and rigid kinds with no overlap, and that of Ovid’s Metamorphoses on the other. In spite of the relatively recent ascent of evolutionism to accepted scientific orthodoxy, there has always been available a conception of species as fluid and dynamic. This conception easily captures the imaginations of social progressives and utopians, that is, of those who believe that change for the better is possible and indeed desirable. The numerous monuments to Darwin throughout the Soviet Union (which I hope have not been scrapped along with those to Lenin) were once a testament to this.

Social conservatives on the other hand see fixity as desirable, and tend to conceive of change in terms of degeneration. A bestiary of eternal, non-overlapping animal species would provide for them a paradigm of stability that could easily be carried over from the natural to the social world, while the loss of this fixed taxonomy of natural kinds would seem equally to threaten the social stasis the conservative seeks.

The prospect of change in species over time, then, including the human species, will be a more useful way of conceptualizing the natural world in times of heady social upheaval; in political climates such as the current one, it is not surprising to see public figures shying away from the chaotic instability of the Metamorphoses in favor of the clear boundaries of the Old Testament.

I am not saying that evolution is just ideology. I believe it is true. I believe that creationism, in turn, is false, and that it is an ideology. And precisely because it is one, it is a waste of time to do intellectual battle with creationists as though they had a respectable scientific theory. Instead, what we should focus on is the rather remarkable way in which folk cosmology, whether that of the Azande, the ancient Hebrews, or Pat Buchanan, may be seen to embody social values, and indeed may be read as an expression on a grand scale of rather small human concerns.

The small human concerns at the heart of the creationist movement are really just these: that everything is going to hell, that the kids don’t listen to their folks anymore, that those low-cut jeans show far too much. Creationism is but the folk cosmology of a frightened tribe. This is also an ancient tribe, counting the authors of Genesis, Bulwer, and Buchanan among its members, and one that need be shown no tolerance by those of us who recognize that change reigns supreme in nature, and that fairy tales are powerless to stop it.

Monday Musing: Be the New Kinsey

Last week my wife and I saw the biopic Kinsey, in which Liam Neeson plays the entomologist turned pioneering sex researcher, Dr. Alfred Charles Kinsey. It’s a pretty good movie. Rent the DVD. Kinsey spent the early part of his career as a zoologist studying gall wasps, on which he became the world’s foremost expert. He collected over one million different specimens and published two books on the subject. Then, at some point in the early 1930s, while pondering the variety of sexual behavior in the wasps he was studying, he started wondering about the range and variety of sexual behavior in humans. When he looked into it, he was dismayed by the prevalent scientific ignorance about even very basic physiological sexual function in humans, much less complex sexual behaviors. Remember, this was a time when even college-age men and women often had very little information about sex.

But in this vacuum of knowledge where the angels of science feared to tread, as usual, the ever-confident fools of myth and dogma had rushed in with loud proclamations such as: masturbation causes blindness; oral sex leads to infertility; the loss of one ounce of precious, life-giving semen is equivalent to the (rather more obviously enervating) loss of 40 ounces of blood; and so on and on. We’ve all heard these sorts of things. In addition, there was very little information about the more real dangers and risks of sexual behavior, such as venereal disease transmission. When Kinsey taught a “marriage” class at Indiana University in Bloomington, an early form of sex ed, his students often asked him questions that he could not answer because the answers simply were not known.

Embarrassment at this state of affairs prompted Kinsey to action. As an accomplished ethologist (one who studies animal behavior in the natural environment) he realized that in addition to studying the physiological sexual equipment of humans and the mechanics of sexual response, it was important to compile data on human sexual behavior “in the wild”, and he undertook the prodigious task of conducting detailed surveys of men and women across the 48 states of America to compile statistics about their sexual behavior. He didn’t reach his goal of interviewing more than a hundred thousand people, but he did make it to the tens of thousands. I cannot go into the details of his methodology here but it has been fairly well defended. In 1948, Kinsey published the first of two books based on his exhaustive sex research that would eventually alter the course of this country in significant ways: Sexual Behavior in the Human Male. Five years later he published Sexual Behavior in the Human Female.

What Kinsey found is well-known now but was absolutely scandalous at the time: the prevalence of homosexuality and bisexuality; the ubiquity of masturbation, especially in males; the common practice (at least much more common than anyone had previously thought) of oral sex, anal sex, premarital sex, infidelity, and other forms of “deviant” behavior. While Kinsey simply reported the raw data, never advocating social policy or drawing moral conclusions, the open airing of these previously taboo subjects had far-reaching effects, not only contributing significantly to the sexual revolution of the 60s but, importantly, resulting in the eventual massive decriminalization of various sexual practices such as (but not limited to) sodomy and oral sex across the states. There were other results as well: it took until 1973 for the American Psychiatric Association to remove homosexuality from its list of mental illnesses, but it happened at least partly because of Kinsey’s work. Most significant of all, however, Kinsey’s reports went a long way toward lifting the clouds of ignorance and fear that had long whelmed sex.

Now it occurred to me, after I saw the movie, that there is an area of human practice (and, yes, it is more-or-less universal) which is today covered in the same clouds of ignorance and fear; which has distorted the well-intentioned aims of the criminal justice system and is filling up our jails; and which is dominated by myth and dogma in much the same way sex was before Kinsey had the courage to defy the taboos surrounding it and clear that fog with his bright beam of information: it is drug use.

What is drug use? I shall define it here for my purposes as the consumption (whether by ingestion, inhalation, injection, absorption or any other means) of a substance with little or no nutritional benefit, simply out of a desire for its effect on the nervous system. This then includes substances ranging from caffeine and nicotine, to alcohol, marijuana, LSD, PCP, ecstasy, cocaine, crystal meth, heroin, and the thousands of other substances that people use to enhance, or at least alter, their subjective experience of the world. And I won’t even get into prescription drug (ab)use from Valium to Viagra. In every culture I know of, very large portions of the population use at least some of these and other substances. Of course, most people enjoying a friendly chat over tea are not explicitly aware that they are taking a drug, but they are, and it makes them feel better, more energetic, and more awake. Just as some sexual practices are relatively harmless, while others pose real dangers to those who practice them, certain drugs and related practices are more dangerous than others.

Here’s the problem: there is very little useful information available about drugs. The reason is that there is a reluctance bordering on taboo on the part of government and non-government agencies alike to actually spell out the risks of taking drugs. In the case of illegal or other “bad” drugs, there is an absolute refusal to accept the fact that a large part of the population uses and finds pleasure in these substances, and an attempt to marginalize all drug users as criminals, addicts, and degenerates; just as in Kinsey’s time, absolute abstinence is at present the only acceptable message for the public. “Just say no!” That’s it. Where there should be informed messages about the exact risks of various substances, there is fear-mongering instead: the “this is your brain on drugs” ad on TV showing a frying egg, for instance. The implication is that, just as an egg cannot be unfried, once you have used drugs (which drugs? how much? for how long? These questions are not to be asked, and cannot be answered), your brain is permanently fried, whatever that means. After all, a fried brain is just a metaphor; it says nothing scientific about exactly what sort of damage may be done to one’s brain by how much of what drug over what period of time. “Cocaine Kills!” is a common billboard ad. Have you noticed that Kate Moss hasn’t died? Why not? I happen to know a bunch of Wall Street types who have been snorting lines off little mirrors in the bathrooms of fancy downtown clubs pretty much every weekend for at least a decade, and so, probably, do you. None of the ones I know have died so far. I also know a man who tried cocaine for the first time and ended up in an emergency room. So what is the real risk?

The problem with just telling people “Cocaine Kills!” and nothing more is that because they may see many people doing cocaine who are not dropping like flies, they completely dismiss the message as crying wolf. Or, they may think, “Yeah, sure cocaine kills, but so does skiing. Think Sonny Bono. Think Michael Kennedy. Just say no to skiing? I don’t think so. The pleasures outweigh the risks for me.” Why not tell them something useful about what the real statistical risks are? What percentage of the people who do it die from cocaine? What are the chances of dying for every ten times, or a hundred, or a thousand that you take cocaine? Even in the almost religious zeal to curb smoking, the message about the actual risks is endlessly confusing. I have repeatedly read that nine out of ten people who have lung cancer were smokers, but this tells me nothing about what risk I am taking of getting lung cancer by smoking. It could be that only a small percentage of the population gets lung cancer, and of those, the smokers are at disproportionately higher risk. Hugely inflated figures for the number of deaths caused by smoking are routinely thrown around. I have even seen a poster claiming that 3 million people are killed every year in the U.S. by smoking. That’s more than the total number of deaths every year in the U.S.! What I would really like to know is: on average, how much am I shortening my life with every pack-year of cigarettes I smoke? I just looked at various websites such as those of the American Heart Association, the Centers for Disease Control, the World Health Organization, and countless others, and I cannot find this simple information. Why? Just as Kinsey could not answer many of his students’ questions about sex, if a young person today asks just how risky it is to use ecstasy at a rave, for example, we have nothing to say.
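The complaint about the "nine out of ten lung-cancer patients were smokers" statistic is, at bottom, a point about conditional probability: P(smoker | cancer) is not P(cancer | smoker). A minimal sketch of the distinction, with every number invented purely for illustration:

```python
# Toy Bayes'-rule calculation. All numbers below are made up for
# illustration; none are real epidemiological figures.

def p_cancer_given_smoker(p_cancer, p_smoker_given_cancer, p_smoker):
    """Bayes' rule: P(cancer | smoker) = P(smoker | cancer) * P(cancer) / P(smoker)."""
    return p_smoker_given_cancer * p_cancer / p_smoker

# Suppose 0.5% of the population gets lung cancer, 90% of those
# patients smoked, and 25% of the whole population smokes.
risk = p_cancer_given_smoker(p_cancer=0.005, p_smoker_given_cancer=0.9, p_smoker=0.25)
print(f"{risk:.1%}")  # -> 1.8%: elevated, but nothing like "9 out of 10"
```

Plug in different made-up base rates and the answer swings wildly, which is exactly why the bare "nine out of ten" figure, on its own, tells a smoker so little about his own risk.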

Another problem with this “just say no” approach: just as the total-abstinence message masks the differences in danger between sexual practices (having frequent unprotected sex with prostitutes obviously does not carry the same risk as having protected sex with a regular partner) because all are equally discouraged, it also masks the differences between the various practices of using drugs. Smoking a joint is not the same as mainlining heroin, but there is no room for even such crude distinctions in this simplified message. There is only the stark line drawn between legal and illegal drugs. Go ahead and have your fourth martini, but, hey, that hash brownie will kill you!

The same unrealistic refusal to acknowledge that drug use is very common (yes, there are always a few polls of high school sophomores and college freshmen, but nothing serious and comprehensive) across all strata of society results in a distorted blanket approach to all drug use, and the same ignorant fear-mongering that used to exist about sex. The first thing to do is to compile, like Kinsey, detailed information on all drug use (or at least the top twenty most used drugs) by employing the best polling techniques and statistical methods we have. Let’s find out who is using what drugs, legal or illegal. Break it down by age, gender, race, income, geographical location, education, and every demographic category you can think of. Ask how often the drug is taken, how much, in what situations. Ask why the drug is taken. What are the pleasures? Poll emergency rooms. Research the physiological effects of drugs on the human body. Write a very fat book called Drug Taking Behavior in the American Human. I am not advocating any policy at all. I am just advocating replacing ignorance and confusion with irrefutable facts. New directions will suggest themselves once this is done. Maybe, just as people who engage in oral sex are no longer seen as perverts and degenerates, one day Bill Clintons won’t have to say “But I didn’t inhale,” and George W. Bushes won’t have to lie about their cocaine use. On the other hand, maybe we will decide as a society that Muslims were right all along, and ban alcohol. Go ahead, be the new Kinsey.

Have a good week!

My other recent Monday Musings:
General Relativity, Very Plainly
Regarding Regret
Three Dreams, Three Athletes
Rocket Man
Francis Crick’s Beautiful Mistake
The Man With Qualities
Special Relativity Turns 100
Vladimir Nabokov, Lepidopterist
Stevinus, Galileo, and Thought Experiments
Cake Theory and Sri Lanka’s President

Monday, October 3, 2005

From the Tail: Betting on Uncertainty

I think I know where you stand on the ongoing federal court case in Pennsylvania, where parents have sued to block the teaching of intelligent design in their schools. Your position notwithstanding, only 13% of the respondents to a November 2004 Gallup poll believed that God has no part to play in the evolution or creation of human beings. Fully 45% said they believe that God created humans in their present form less than 10,000 years ago!

What’s going on here? Many (perhaps even a majority) of these respondents were taught evolution in school. Did they choose to disregard it merely because it contradicted their religion? They do seem to accept a whole host of other things during the course of their education which may contradict it as well. For example, there appears to be far less skepticism about the assertion that humans occupy a vanishingly small fraction of the universe. I’ll throw out three other explanations that are often advanced, but which I believe to be inadequate as well:

  1. Natural selection is not a good enough explanation for the facts: Clearly, it is.
  2. Natural selection has not been properly explained to the general public: Sure there are common misconceptions, but proponents have had enough school time, air time and book sales mindshare to make their points many times over.
  3. Religious zealots have successfully mounted a campaign based on lies, one that has distorted the true meaning of natural selection: This has conspiracy-theory overtones. There are too many people who do not believe in natural selection; have they all been brainwashed?

My explanation is simply this: Human beings have a strong visceral reaction to disbelieve any theory which injects uncertainty or chance into their world view. They will cling to some other “explanation” of the facts which does not depend on chance until provided with absolutely incontrovertible proof to the contrary.

Part of the problem is that we all deal with uncertainty in our daily lives, but it is, at best, an uncomfortable coexistence. Think of all the stress we go through because of uncertainty. Or how it destabilizes us and makes us miserable (what fraction of the time are you worrying about things that are certain?). In addition to hating it, we confuse uncertainty with ignorance (which is just a special case), and believe that eliminating uncertainty is merely a matter of knowing more. Given this view, most people have no room for chance in the basic laws of nature. My hunch is that that is what many proponents of Intelligent Design dislike about natural selection. Actually, it’s more than a hunch. The Discovery Institute, a think tank whose mission is to make “a positive vision of the future practical” (but which appears to devote the bulk of its resources to promoting intelligent design), has gotten 400 scientists to sign the following “Scientific Dissent from Darwinism”:

We are skeptical of the claims for the ability of random mutation and natural selection to account for the complexity of life. Careful examination of the evidence for Darwinian theory should be encouraged.

In this world of sophisticated polling and sound bites, I think that the folks at the Discovery Institute have gotten their message down pat. To be sure, natural selection is not a theory of mere chance. But without uncertainty it cannot proceed. In other words, natural selection is a theory not of chance, but one that requires it. The advocates of Intelligent Design are objecting to the “purposeless” nature of natural selection and replacing it with the will of a creator. It doesn’t really help matters for Darwinians to claim that chance plays only a marginal role, or that the objection to chance is a proxy for some other insidious agenda. Chance is the true bone of contention. In fact, as Jacques Monod put it over thirty years ago:

Even today a good many distinguished minds seem unable to accept or even to understand that from a source of noise, natural selection could quite unaided have drawn all music of the biosphere. Indeed, natural selection operates upon the products of chance and knows no other nourishment; but it operates in a domain of very demanding conditions, from which chance is banned. It is not to chance but to these conditions that evolution owes its generally progressive course.
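Monod's image of selection drawing music from noise can be made concrete with a toy program, in the spirit of Richard Dawkins's later "weasel" demonstration. Nothing here comes from the essay: the target phrase, mutation rate, and population size are all arbitrary choices for illustration.

```python
import random

random.seed(0)  # fixed seed so the run is reproducible
TARGET = "music of the biosphere"   # arbitrary target phrase
ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def mutate(s, rate=0.05):
    """Copy a string with random, undirected copying errors: pure noise."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in s)

def score(s):
    """The 'demanding condition': how many characters match the target."""
    return sum(a == b for a, b in zip(s, TARGET))

# Start from pure noise.
current = "".join(random.choice(ALPHABET) for _ in TARGET)

for generation in range(1, 5001):
    # Selection: of 100 noisy copies, keep only the best match.
    current = max((mutate(current) for _ in range(100)), key=score)
    if current == TARGET:
        break

print(generation, current)
```

Mutation alone would essentially never type out the target; it is the demanding condition (keep only the best-matching copy each generation) that accumulates the order, which is Monod's point about a domain "from which chance is banned."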

The inability of otherwise reasonable people to accept a fundamental role for randomness is not restricted to religious people; scientists are hardly immune to it. We know that even Einstein had issues with God and dice in the context of Quantum Mechanics. Earlier, in 1877, when Ludwig Boltzmann explained the Second Law of Thermodynamics by introducing, for the first time, probability into a fundamental law, he was met with extreme skepticism and hostility. He had broken with the classical Newtonian imperative of determinism, and so could not be right. After much heartache over answering his many critics, Boltzmann (who had been struggling with other problems as well) hanged himself while on holiday.

Of course one reason we hate to deal with uncertainty is that we are so ill-equipped to do so. Even when the facts are clearly laid out, the cleverest people (probabilists included) make mistakes. I can’t resist providing the following example:

William is a short, shy man. He has a passion for poetry and loves strolling through art museums. As a child he was often bullied by his classmates. Do you suppose that William is (a) a farmer, or (b) a classics scholar?

Everyone I ask this question chooses (b). But that isn’t right. There are vastly more farmers than classics scholars, and even if a small fraction of farmers match William’s characteristics, that number is likely to be larger than the entire set of classics scholars. (Did you just get burned by your meager probabilistic reasoning faculties?) The psychologists Kahneman and Tversky pioneered the field of behavioral economics, which establishes among other things that our heuristics for reasoning about uncertainty are quite bad. You can probably think of many patently dumb things that people have done with their money and with their lives when a simple evaluation of the uncertainties would have resulted in better outcomes.
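The arithmetic behind the William puzzle is worth making explicit. Here is a sketch with invented numbers; the counts and fractions below are not real statistics, just plausible-looking placeholders:

```python
# Invented figures illustrating base-rate neglect in the William puzzle.
farmers = 2_000_000          # hypothetical number of farmers
scholars = 5_000             # hypothetical number of classics scholars
match_rate_farmers = 0.01    # fraction of farmers fitting the description
match_rate_scholars = 0.60   # fraction of scholars fitting it

matching_farmers = farmers * match_rate_farmers      # 20,000 people
matching_scholars = scholars * match_rate_scholars   # 3,000 people

# A matching scholar is far more "typical", yet a randomly chosen person
# who fits the description is still overwhelmingly likely to be a farmer.
odds = matching_farmers / matching_scholars
print(f"farmer is about {odds:.0f}x more likely")
```

Even with 60 times the match rate, the scholars are swamped by the sheer number of farmers: the base rate, which intuition throws away, dominates the answer.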

So back to getting people to accept uncertainty as an inherent part of the world. As you can probably tell, I am not holding my breath. On evolution, the timescales are too long to be able to provide the incontrovertible proof to change most people’s minds. Maybe a better approach is to reason by analogy. There is an absolutely staggering amount of purposeless evolution unfolding at breakneck speed before our very eyes. I am talking about the Web, the very medium through which you are reading this. In only about ten years a significant portion of the world’s knowledge has become available, almost instantaneously accessible, and free. Consider these figures from a recent article by Kevin Kelly. The thing we call the Web has

  • more than 600 billion web pages available, accessible by about 1 billion people.
  • 50 million simultaneous auctions running on eBay, adding up to 1.5 billion a year.
  • 2 billion searches a month being done on Google alone.

Think back to what you were doing ten years ago. Did you ever really think that any of this would happen? The scale at which the internet operates was envisioned by none of the engineers and computer scientists who collaboratively attempted to design the basic substrate of protocols upon which it runs. In truth, the innovations and designs of the web come from the collective energies of its users, and not from an intelligent design or a blueprint. Here the purposelessness of evolution is much easier to see. One day in the future some theory will reveal, as a simple consequence, why all of a sudden in the years 2004-05 there sprang up 50 million blogs, with a new one coming online every 2 seconds. This theory of evolution will be framed by a Law, and this law will have at its core an indelible, irreducible kernel of chance. And chances are, most people will have a hard time believing it.

Monday Musing: Enchantment and pluralism, some thoughts while reading Jonathan Strange & Mr. Norrell

Throughout much of the writings of the German sociologist Max Weber, you can find the claim that modernity and its rational control over the natural world demanded the disenchantment of the world; that is, the exit of the sacramental from material things and the end of sacrament as a means (or rather, as an appeal to the world) of fulfilling our roles and ends. The roles of the religious and the spiritual dwindle. Science and technology displace magic. But specifically, they displace magic in the realm of means.

Weber saw this mostly in the rise of capitalism and the modern bureaucracy and in the Protestantism that has, or had, an “elective affinity” to modernity itself.

Only ascetic Protestantism completely eliminated magic and the supernatural quest for salvation, of which the highest form was intellectualist, contemplative illumination. It alone created the religious motivations for seeking salvation primarily through immersion in one’s worldly vocation. . . For the various popular religions of Asia, in contrast to ascetic Protestantism, the world remained a great enchanted garden, in which the practical way to orient oneself, or to find security in this world or the next, was to revere or coerce the spirits and seek salvation through ritualistic, idolatrous, or sacramental procedures. No path led from the magical religiosity of the non-intellectual strata of Asia to a rational, methodical control of life. (The Great Religions of the World)

And that pinnacle expression of and institution for methodical control of the world, the bureaucracy, was notable, according to Weber, precisely for its irreligion.

A bureaucracy is usually characterized by a profound disesteem of all irrational religion . . . (Religious Groups)

Reading Susanna Clarke’s Jonathan Strange & Mr. Norrell, which, admittedly, I’m only halfway through, I was reminded of Weber (which is not so uncommon). The novel, set in the early 19th century, concerns the reappearance of magic in the modern world. In the novel, magic existed once upon a time, but had disappeared three centuries earlier, at the end of the Middle Ages. Against the backdrop of the Napoleonic Wars, two practicing magicians appear in England—a re-enchantment, of sorts.

Prior to the appearance of the two practical magicians, magic is purely theoretical, the occupation of historians and scholars, but not practitioners. Interestingly enough, these historians and scholars in the novel are also called “magicians.” The magic societies resemble philosophy circles and salons. And the idea of magic in the novel as a metaphor for philosophy is an obvious one, if only because the line between magic and philosophy seems so blurry in the Middle Ages. Merlin certainly appears a philosopher magician, a sage.

The two, Jonathan Strange and his teacher Mr. Norrell, lend their services to the war effort, and we are given an image of magic interacting with the specialized, but also distant and abstract, knowledge of bureaucracy. And it’s a funny image: two separate relationships to means in conflict, with neither depicted in a flattering way.

Enchanted (or mysterious) means don’t seem any more sensible or effective than dis-enchanted (rational, methodical) ones. (At least so far.)

(And I was also disappointed to learn that the connection between “wizard” and “vizier” is accidental.)

I was thinking of these issues in the context of a larger one: namely, why does so much fantasy appear to be conservative? The Lord of the Rings itself, and not just Tolkien, seems clearly conservative in its politics. And by conservative, I don’t mean that it simplifies politics, but rather that it harkens back to a time before a monistic conception of the good (usually given by religion) collapsed in favor of the pluralism of ends that we enjoy and which defines the freedom of the moderns. To follow John Holbo and invoke Isaiah Berlin, people disagree about the ends of life and not just the means. And the modern world has been set up to allow people to disagree and live their lives in the way they like without too much conflict, at least ideally.

There are exceptions to my claim that fantasy seems to go with conservatism, to be sure: Buffy the Vampire Slayer, for one. But it does seem that the practical representation of magic often takes place against the backdrop of, at least, a locally all-embracing purpose, most commonly war. It’s almost as if the absence of a methodical control of life and the world requires that the ends of life be controlled thoroughly. Conversely, the rationalization of the world appears to go part and parcel with the pluralism of ends. (Of course, Weber, and some of those he inspired, including the Marxist Frankfurt School, was terrified that values, monistic or plural, would exit altogether from the modern world under its rationalization, and that means would become ends in themselves; though it seems that no one can give an example other than the accumulation of money or commodities.)

At least so far, Clarke seems to avoid the conundrum, or appears to make fun of the genre’s political naiveté. (It apparently gets even better, in terms of political richness.) And it seems to me that to the extent that the backdrop of fantasy can shift from the Wagnerian saga to the quotidian, magic can find a place in the modern world.

Lives of the Cannibals: Redemption

On May 29, 1983, Steve Howe, a 25-year-old relief pitcher for the Los Angeles Dodgers, checked himself into a drug rehabilitation center to treat an addiction to cocaine. Howe was a promising young star, 1980’s Rookie of the Year, and endowed with the hyperactive, pugnacious demeanor of a natural-born “closer,” the pitcher charged with saving tight games in treacherous late-inning situations. He completed his rehab in late June, but was sent away again in September after missing a team flight and refusing to submit to urinalysis. He tested positive for cocaine three times that November, and was suspended from baseball for the 1984 season, one of several players caught up in the decade’s snorty zeitgeist. Howe returned to the mound in ’85 and over the next six years pitched sporadically for the Dodgers, the Minnesota Twins and the Texas Rangers, as well as a Mexican League team and a couple of independent minor-league clubs in the States. But June of ’92 found Howe busted again, and Fay Vincent, then the commissioner of baseball, banned him for life. An arbitrator later vacated Vincent’s decision, reinstating Howe, and the New York Yankees signed him to pitch in the Bronx. After Yankee relievers suffered a mid-season collapse in 1994, Howe stepped into the breach and, notwithstanding his caged pacing and myriad facial tics, recorded 15 clutch saves and a 1.80 earned run average, winning the enduring affection and respect of Yankee fans, who have a proud history of adopting the troubled and eccentric, just so long as they win.

Welcome to New York, perhaps the most prolifically redemptive island in human history. Granted, islands are built for redemption. Their isolation and exclusivity require new beginnings from their inhabitants, and they tend in general (and New York’s islands tend in particular) to transact life on terms different from other places. In the City, where the hybrid system runs on aggression, aplomb and sex appeal, fatuous Wall Street wizards and Upper East Side tastemakers serve prison sentences and emerge hotter than ever, redeemed not by God or humanism but by the very fact of their fall from grace. It’s exotica, a matter of salacious interest and a perfect bluff for the social scene, where a big rep is all it takes, and the smart ones ride theirs all the way to a clubby write-up in Talk of the Town. Sure, a prison term is a nuisance, but it’s also useful (if bush-league) preparation for the more exigent realities of life in Manhattan. So it’s no surprise that we should admire the same things in our more middle-class heroes: our athletes and actors, and our politicians too. We want contrition, of course, and we must remember the children, but a little imperfection makes for a compelling character, and we won’t have that sacrificed.

The New York Yankees opened their 2005 season 11-19. It was the worst start anyone could remember, and it came on the heels of the greatest collapse (or comeback, depending on your regional perspective) in baseball history, when, in the second round of the 2004 playoffs, the Yankees were eliminated by the Red Sox despite winning the first three games of a best-of-seven series. In every one of the last nine years, they had made it to the playoffs, and in every one of the last seven, they had won the American League’s Eastern Division title, but 2005 seemed different. They were paying fifteen, ten, and seven million dollars to three starting pitchers of dubious value (Brown, Wright, and Pavano), and they had purchased the super-rich contract of Randy Johnson, once inarguably the finest pitcher in the major leagues, but now, at 41, a cranky and unreliable prima donna, whose 6’10” frame and acne-scarred face looked pained and out of place in Yankee pinstripes. Their beloved veteran center fielder Bernie Williams couldn’t throw anymore, and their traditionally solid bullpen hemorrhaged runs nightly. It was a difficult reality for fans who had been treated to a decade of near-constant success, but it was manna for the millions of Yankee haters, whose unfailing passion evinces the team’s historical greatness and cultural significance. In the wake of their ignominious 2004 defeat at the hands of the Red Sox, and finding themselves in last place in the American League East, the Yankees and their fans despaired. It was over.

Enter Jason Gilbert Giambi and Aaron James Small, high school classmates from California and unlikely Yankee teammates, whose personal redemptions spurred the 2005 Yankees to their eighth consecutive division title on Saturday. Giambi, a longtime star slugger, is one of the few known quantities in the recent steroid controversy (and Capitol Hill comedy, where the workout regimens of professional athletes have curiously attained massive political profile), whose leaked grand-jury testimony marks him as a confirmed (though not explicitly stated) user. Giambi spent most of 2004 on the Yankees’ disabled list, recovering from mysterious fatigue and a suspicious tumor, both of which, it seemed likely to pretty much everyone who gave it any thought, might just be the rightful wages of sticking a hypodermic needle in your ass and suffering nascent breast development, in exchange for increased strength and the ability to heal faster (a superhero’s tradeoff). But if nothing else became clear in 2005, at least Jason Giambi wasn’t on the juice. Never did a hitter look more helpless at the plate than poor Jason. He flailed and whiffed, and the earnest cheerfulness that once endeared him to fans and teammates curdled into delusive optimism. He was done.

But he wasn’t. Through the first two months of the season, Giambi claimed to be on the right track. He still had his good eye, he pointed out, referring to all the walks he earned, and it was just a matter of timing and bat speed after that. Fans and the media were indulgent but skeptical. The Yankees are a known rest-home for aging, overpriced talent, and Giambi’s story, though more dramatic than the trajectory of your average baseball player’s decline, did fit the profile. But, much to everyone’s surprise, he started hitting again, and what he started hitting were home runs–tall flies that took ages to land, and missiles that slammed into the bleachers moments after cracking off his bat. Giambi began driving in runs at a faster pace than anyone else on a team full of standout run-producers, and he continued reaching base on the walks that served as his crutch in those first miserable months, all of which amounted to league-leading slugging and on-base percentages. Jason was redeemed, and his legend is assured now as the star who wanted more, who lost everything to greed and arrogance, and who recovered his glory, which is now vastly more appealing for the fact that it’s tarnished. It’s a real New York kind of story.

As for Aaron Small, his is a story of redemption too, but one more suitable for middle America, which might not take so kindly to the resurrected likes of Steve Howe and Jason Giambi. Like Giambi, Small is a 34-year-old baseball veteran, but a veteran of the minor leagues, whose only pro success has been the several “cups of coffee” (as baseball cant has it) he’s enjoyed in the majors in 16 years of playing–short stints in the bigs, followed by interminable bus rides back to the minors. This year, Small was called up to plug the holes left by the Yankees’ multimillion-dollar washouts, Brown, Wright and Pavano. Small, it should be noted, is the type of guy who thanks God for minor successes, a tendency not uncommon in local basketball and football players, but one that seems exceedingly peculiar in a glamorous Bronx Bomber. Nevertheless, he has been embraced by New York fans, and their acceptance has everything to do with the ten victories he compiled (against no defeats) in his partial 2005 season. This modest, Southern country boy outpitched every high-priced arm the Yankee millions could buy, and after every game he shucksed his way through interviews, praising his patient wife, praising his remarkably attentive savior, and just generally expressing his shock and pleasure at finding himself in the heat of a big-league pennant race after more than a decade-and-a-half of slogging his way from minor-league town to minor-league town. Small’s story is relevant here because his time is short. His 16-year patience, his redemption, will not remain in the minds of New Yorkers very long, not unless he does something colossally self-destructive–and he better do it quick. We like a little dirt on our heroes, a little vulgarity, because otherwise it’s all hearts and flowers and strait-laced (and -faced) fortitude, and what could be more dull? New York takes pride in its corruptions, and a hero isn’t a New York hero until he’s been dragged down and beaten (preferably by his own hand).

And this is why the 2005 Yankees have a shot at being the most memorable team to come out of the City in years. They’ve seized every opportunity to make things hard this season. Every potential run-scoring at-bat, every pitching change and every difficult fielding chance has come with the sour taste of unavoidable failure, the sense that we’re almost out of gas now after a decade at the top. Our trusty veterans have lost their vigor and our big-name stars are compromised–by their egos, their paychecks and their tendency to choke. The obstreperous owner is lapsing into dementia, and even Yankee Stadium itself has entered its dotage. Indeed, what we’re confronted with is the last, limping formation of a great baseball team, occasionally disgraced by its swollen personalities and bottomless, ignorant pockets, trying to fashion for itself a true New York kind of glory–one that climbs out of the depths, battered and ugly. This is our redemption.


Poison in the Ink: The Life and Times of Fridtjof Nansen

In 1906 Santiago Ramón y Cajal and Camillo Golgi shared the Nobel Prize in Physiology or Medicine for their contributions to neuroscience: Cajal for work that helped lay the foundation for the Neuron Doctrine, and Golgi for the development of the Golgi stain, which was crucial to the work of so many neuroscientists, including Cajal. Unknown to most people, however, is that a Norwegian zoologist named Fridtjof Nansen had declared the independence of the cellular nerve unit a year and a half earlier than Cajal, using the same Golgi stain employed by the Spanish histologist. When Cajal was just beginning to learn about the staining technique from a colleague, Nansen had already published a paper stressing the point.

On October 26, 1892, a crowd gathered for the christening of the Fram, a custom-built ship designed to take Fridtjof Nansen and his crew to the roof of the world. Four years had passed since Nansen had become the first European to cross the interior of Greenland, and he now hoped to win the race to become the first to reach the North Pole.

Among the guests present at the event was Gustaf Retzius, a colleague from Nansen’s early days as a neuroscientist. During a speech made at dinner that night, Nansen turned to Retzius and said that the field of neurobiology, like polar exploration, involved “penetrating unknown regions” and he hoped one day to return to it.

For all of his good intentions, Nansen never did return, and it would be something he would express regret over many times throughout his life. As he put it, after “…having once really set foot on the Arctic trail, and heard the ‘call of the wild’, the call of the ‘unknown regions’, [I] could not return to the microscope and the histology of the nervous system, much as I longed to do so.”

Those familiar with Nansen probably know him as an arctic explorer and as a world-famous diplomat who was awarded the Nobel Peace Prize in 1922 for his efforts to repatriate refugees after World War I.

But before the arctic expeditions and the humanitarian work, Nansen was a young zoologist interested in biology and the nervous system. He was one of the world’s first modern neuroscientists and one of the original defenders of the idea that the nervous system was not one large interconnected web, but instead was made up of individual cells that Wilhelm Waldeyer would later call “neurons” in his famous 1891 Neuron Doctrine.

From a young age, Nansen was fascinated with nature; he loved its “wildness” and its “heavy melancholy” and he was happiest when he was outdoors. When it came time for Nansen to enter the University of Christiania (now known as the University of Oslo), he chose to major in zoology.

During his first year, Nansen answered a call from his department for someone to visit the arctic and collect specimens of marine life. In 1882, he set off for the east coast of Greenland aboard the sealing vessel Viking on a voyage that would last four and a half months.

The trip was a unique turning point in Nansen’s life. It provided him with his first glimpse of the Arctic and instilled in him the desire to cross Greenland’s icy interior.

“I saw mountains and glaciers, and a longing awoke in me, and vague plans revolved in my mind of exploring the unknown interior of that mysterious, ice-covered land,” Nansen wrote.

Upon his return, the 20-year-old Nansen was offered a post as the curator of the zoological department at the museum of Bergen. Nansen gladly accepted the position. His arctic dreams were put aside and the next six years were spent studying the invertebrate nervous system through a microscope.

One of the greatest difficulties Nansen faced in his research involved staining sections of nerve tissue. With the methods available at the time, the most that could be revealed of a neuron was its cell body, the proximal—and sometimes secondary—branch-like extensions of its dendrites and the initial segments of its thread-like axon.

At around this time, word was circulating that an Italian physician named Camillo Golgi had developed a new staining technique, one that stained only a few nerve cells in a section at a time, but which stained them so thoroughly that they were visible in their entirety.

After catching wind of Golgi’s technique from a colleague, Nansen decided to pay the Italian doctor a visit. Despite arriving unannounced at Golgi’s lab in Pavia, Nansen was surprisingly well received, and under the doctor’s careful tutelage he mastered what would become known as the Golgi stain in only a matter of days.

Upon his return, Nansen applied the Golgi stain to the nerve cells of a primitive fish-like animal called the lancelet. For the first time, Nansen could see clearly all the intricate branches of a neuron’s dendrites and could follow the entire length of an axon before it made contact with another neuron.

Armed with this new tool, Nansen began seeing things that couldn’t be explained by the reticular network theory, the reigning theory at the time for how nervous systems were organized. According to this theory, the nervous system was like a giant mesh net, with nerve impulses—whatever they might be—traveling unimpeded from one area to another.

One of Nansen’s objections to this view was based on a simple anatomical observation. The existence of unipolar neurons, or unipolar “ganglion” cells as they were known at the time, puzzled Nansen and led him to ask a very logical question: how could unipolar neurons exist if nerve cells fused into one another, as was commonly believed? “How could direct combination between the cells be present where there are no processes to produce the combination?”

As their name suggests, unipolar neurons have a single primary trunk that divides into dendrites and an axon once away from the cell body. This is different from the image of neurons that most people are accustomed to, which shows numerous dendrites branching off the cell body at one end and a long thread-like axon, terminating in tiny knobs, at the other.

Other scientists attempted to explain away unipolar neurons by arguing that they were not very common. The closer the nervous system was examined, they said, the fewer unipolar neurons were found, especially in vertebrates like mammals and humans. Nansen remained unconvinced and pointed to invertebrates like lobsters, whose nervous systems are made up almost entirely of unipolar neurons. To Nansen, this was strong evidence that the reticular network theory couldn’t be correct, and in an 1887 paper he made the statement–bold at the time–that “a direct combination between the ganglion cells is…not acceptable.”

Nansen had his own theory about how nerve cells might combine. He proposed that it was in the ‘dotted-substance’ (what modern neuroscience calls “neuropil” in invertebrates and “gray matter” in vertebrates) that nerve cells communicated with one another. Nansen went even further, prophetically declaring that this ‘dotted-substance’ was the “principal seat of nervous activity” and “the true seat of the psyche.”

In the concluding paragraph of his 1887 paper, Nansen insisted that the dotted-substance would no doubt prove to play an essential role in whatever the final function of the nerve cells was determined to be. Unable to resist making one last speculation, Nansen also wrote the following:

“It is not impossible that [ganglion cells] may be the seat of memory. A small part of each irritation producing a reflex action, may on its way through the dotted substance be absorbed by some branches of the nervous processes of the ganglion cells, and can possibly in one way or another be stored up in the latter.”

In this, Nansen was especially farsighted, touching upon what modern neuroscience calls “neuroplasticity,” currently one of the most promising explanations to account for how simple reflexes can undergo modifications that last for minutes at a time and how learning can lead to behavioral changes that can last for a lifetime.

In the spring of 1888, Nansen presented a shortened version of his paper for PhD consideration to a review board in the ceremonial auditorium of Christiania University. In what was described as a heated presentation, Nansen reiterated his firm belief that nerve cells were not fused into a reticular network, that they were instead independent cellular units. Nansen’s conclusions were met with hostility by the review board’s members and he was accused of jumping the gun and getting ahead of his evidence.

Nansen was awarded his degree in the end, but not before one panel member expressed his firm conviction that Nansen’s hypothesis was destined to be forgotten like so many others.

The experience was a taxing one for Nansen. “In the end, there was such a confusion of one thing on top of another…that I believe that had it continued any longer I would have had a nervous breakdown,” he later wrote to a friend. “There was hardly a second to spare; we finished precisely as calculated, but no more.”

In this, Nansen wasn’t exaggerating. He was running out of time. Nansen was scheduled to depart, four days after his PhD defense, on a trek across the unexplored interior of Greenland. A long-time dream was finally coming true.

Nansen personally saw to every aspect of the trip. In a plan that critics called dangerous and foolhardy, Nansen proposed to cross Greenland from east to west. It would be a one-way ticket for himself and his team, with no chance of turning back.

“In this way one would burn one’s boats behind one,” Nansen wrote. “There would be no need to urge one’s men on, as the east coast would attract no one back, while in front would lie the colonies on the west coast with the allurements and amenities of civilization.”

It took nearly two months, but in the end Nansen proved his critics wrong and his company of six became the first Europeans to cross the frozen island’s expansive interior.

The Greenland expedition gave Nansen his first taste of international fame. It sealed his reputation as an explorer and ended his career as a zoologist. Wanderlust had found its perfect victim, and soon Nansen was making plans to embark on another first.

For his next adventure, Nansen set his sights on becoming the first to reach the North Pole. In a highly criticized plan, Nansen proposed to freeze the Fram in an ice floe and let it drift along a current that flowed from east to west across the Polar Sea.

But things didn’t turn out quite as Nansen had hoped, and the Fram did not drift close enough to the Pole. In a last-ditch effort to salvage the mission, Nansen left the ship, determined to complete the journey on foot. He took with him only one other companion, Hjalmar Johansen, some sled dogs and enough supplies to last three months.

But the harsh conditions and uneven terrain proved to be more than the pair expected, and the two watched helplessly as their original three months stretched on for much longer.

“We built a stone hut, we shot bears and walrus, and for ten months we tasted nothing but bear meat,” Nansen wrote in his journal. “The hides of the walrus we used for the roof of our hut, and the blubber for fuel.”

In the end, a lack of supplies forced the two to turn back before they could reach the North Pole, but they held the record for Farthest North for the next five years.

The Fram voyage was Nansen’s last major expedition. As he grew older, Nansen became increasingly involved in politics, first becoming the Norwegian ambassador to London and then a high commissioner for the newly formed League of Nations. From 1919 until his death in 1930, Nansen was a devoted global humanitarian. In 1920, when nations were still trying to rebuild Europe after the devastation of World War I, Nansen was dispatched by the international organization to direct the repatriation of half a million prisoners of war who had not yet been exchanged. Afterwards, Nansen successfully raised funds for famine relief efforts in Russia on behalf of the Red Cross.

For his success in these two tasks, Nansen was awarded the Nobel Peace Prize in 1922. When presenting him with the award, the Chairman of the Nobel Committee had these words to say about Nansen: “Perhaps what has most impressed all of us is his ability to stake his life time and time again on a single idea, on one thought, and to inspire others to follow him.”

The reference was to Nansen’s humanitarian work, but the same sentiment could have just as easily been applied to his numerous other undertakings. Whether he was navigating uncharted landscapes of ice, introducing compassion to the realm of politics, or defending an unpopular view of the nervous system, Nansen readily staked his reputation and often his life on his beliefs. Any of these tasks could have easily occupied a person for a lifetime, but Nansen tackled each unknown with fresh enthusiasm, and was rewarded in many cases with success.


Monday, September 26, 2005

Atelier: Hurricanes, Race, and Risk

Ray Nagin, mayor of New Orleans, has, as of yesterday, officially invited the residents of Algiers to return to their homes. Algiers, a neighborhood of 57,000 people, is situated on the other side of the Mississippi River, away from the main part of New Orleans; consequently, it was largely untouched by the massive flooding from Hurricanes Katrina and Rita, and, unlike much of the rest of the city, it has clean water and electricity. The Ninth Ward, on the other hand, perhaps the hardest hit of all the New Orleans neighborhoods, was the site of a second round of crumbling levees and massive flooding, this time courtesy of Hurricane Rita. The differences between these two neighborhoods – one predominantly white and middle class, the other impoverished and overwhelmingly black – are, of course, largely over-determined. It seems, however, that of all the differences between Algiers and the Ninth Ward, the most nettlesome one continues to be the fact of their racial difference, a difference, for sure, that New Orleans, especially given its racially contentious history, is keenly aware of. It is this racial distinction, and the host of inequities that this distinction serves to cover up, that has been so ruthlessly exposed and indicted by the arrival of Hurricane Katrina.

What is unusual about New Orleans is that, historically speaking, racial segregation in U.S. cities has generally followed a purely horizontally oriented spatial logic. Detroit provides perhaps the best example of this movement: generous FHA housing subsidies encouraged whites to migrate to the outlying suburbs, while those residents who remained in the inner city, an overwhelming number of whom were black, were left to grapple with the difficulties of an inner city that was increasingly dilapidated, de-industrialized, and under-serviced. The topography of New Orleans is a spatial manifestation of these same generous and highly racist Federal Housing Administration loans; we see a similar urban sprawl effected by the movements of whites out of the center of the city. The history of New Orleans, when seen through its longue durée, adds a particular Cajun piquancy to the normal ways in which space is meted out in relation to race. Of all the amenities available to white New Orleans, its most easily forgotten has always been its relative safety from the contingent forces of nature. While New Orleans had its last colossal flood in 1927, the Ninth Ward has suffered several smaller ones: the Industrial Canal, which cuts through the Ninth Ward, and which failed during Katrina’s onslaught, has failed three times before, during Hurricane Flossy in 1956, Hurricane Hilda in 1964 and Hurricane Betsy in 1965.

Because of its peculiar geography, New Orleans has always been a disaster waiting to happen; consequently, real estate values in New Orleans reflect and anticipate this impending danger: how badly a given flood or hurricane affects you is determined primarily by where in New Orleans you live. The Ninth Ward is both the poorest and the lowest part of New Orleans. This confluence of qualities particular to the Ninth Ward makes sense once we examine those gross inequities that Hurricane Katrina served to expose. In New Orleans, differing abilities to insure against the contingencies of the future are not only bifurcated along an axis of race, but are also physically materialized within the built environment. The spatial features of New Orleans only truly make sense, however, if we take into consideration the ways in which race serves to cover over these economic and spatial inequalities.

In the context of the United States, the construction of race has historically manifested as a black and white binary; such a forced construction needs perpetual maintenance, of course: the case of Plessy v. Ferguson (1896) – brilliantly analyzed in Saidiya V. Hartman’s Scenes of Subjection – is only the most dramatic instantiation of a wide variety of legal and social mechanisms employed to maintain the fiction of racial difference. This fiction – to be cynical for a moment – continues to provide a necessary service: blackness has always functioned in this country, albeit in historically varied ways, as an alibi for those economic inequalities that are an inherent feature of capitalism.

Blackness serves to simultaneously elide the cause and corporeally represent the effect of capitalism’s inherent inequality. By reifying the equivalence between blackness and, say, poverty, there is a retroactive, normative ‘logic’ that comes to the fore which precludes blackness from being seen as a bodily attribute that has been constructed to always already represent a category of people positioned as poor and unequal. This reified equivalence insists upon a direct causal relation that equates blackness in its ontological essence with the existential fact of being poor and unequal. Blackness, to function as it does, requires an illogical confluence between the realm of appearances and the realm of “essences”. To the extent that supposed cultural or economic inequalities come to be represented and representable to society by the appearance of black skin, the structural inequalities of our economic system are, to a certain extent, made to disappear.

As the flood waters recede, so too will the media coverage; perhaps it would do us all well to continue attending to a disaster whose aftermath has exposed more acutely and more incisively than perhaps any other event of the last decade the insidious function of race within the United States. Forced into the national spotlight by a gross and perhaps even criminally negligent mishandling of those worst-off residents of New Orleans, the vast majority of them both poor and black, race (accompanied as always by its steadfast companion, racism) has finally come clean: the sorry truth is that racial distinctions (and the inevitable hierarchy and exploitation that attends such distinctions) are as American as apple pie; race was violently stitched into our nation’s very fabric, right from its beginning. It is high time we looked carefully, without flinching, at that warped and misshapen pattern that our nation has woven.