Selected Minor Works: Oh. Canada.

Justin E. H. Smith

One often hears that Montreal is the New York of Canada. It seems to me one may just as well say that Iqaluit is the New York of Nunavut. Both analogies are true enough, insofar as each settlement in question is the undisputed cultural capital of its region. But analogies can often work simply in virtue of the similitude of the relation in each of the pairs, even when the two pairs are vastly different the one from the other. Montreal is the New York of Canada, to be sure. But Canada, well… Canada is the Canada of North America.

This will be the first of two articles in which I lay out a scurrilous and wholly unfounded diatribe against the place I now call home. The second part will consist in a screed against Canada as a whole; today I would like to direct my bile towards Montreal in particular.

Sometime in early 2002, there was an amusing article in the New York Times, chronicling the fates of a few New York families that had fled to re-settle with relatives in Canada for fear of further attacks. Within a few months, they were back. As I recall, one man was quoted as saying something like: I’d rather go up in a fireball, I’d rather be vaporized, than live out the rest of my days up there.

New York pride is not only quantitative, but it is interesting to note that there was more square footage in the World Trade Center than in all the highrises of Montreal combined. Still, in terms of square feet, if not of lives, September 11 scarcely made a dent in Manhattan. It is of course not everywhere that the greatness of a city is measured by the number of skyscrapers it hosts. If this were the universal measure, Dallas would have London beat by a long shot. But in Montreal the skyline is constantly pushed, on the ubiquitous postcards and tchotchkes sold along St. Catherine Street, as though this were some great accomplishment of human ingenuity, rather than a paltry imitation, a mere toy model, of the envied city to the south.

Les gratte-ciel are also celebrated shamelessly in Quebecois art and cinema. Take Denys Arcand, the tiresome and repetitive director of The Decline of the American Empire and its sequel The Barbarian Invasions, as well as of the slightly more compelling 1989 film, Jesus of Montreal. The way he cuts to new scenes with panoramic shots of the city’s skyscrapers at night, alto saxes blaring, you would think you were watching a promotional segment of the in-flight entertainment program on an incoming Air Canada plane. You would almost expect this schmaltzy segue to be followed by scenes of children getting their faces painted at a street fair, of horse carriages in the old town, or of a group of young adults, sweaters tied around their necks, laughing in a restaurant booth as a man in a chef’s hat serves them a flaming dessert. And yet this is not Air Canada filler, but the work of a supposedly serious director, himself only one example of a very common phenomenon in French Canadian movies. Every time I see the Montreal skyline glorified in Quebecois cinema, I think to myself: if Nebraska had a state-subsidized film industry, Omaha too would be portrayed as a metropolis.

But pay attention to the panorama, and you will see that there is simply not much there. Montreal is probably a notch closer to Iqaluit than it is to New York on the scale of the world’s great cities. I place it just behind Timisoara, and just ahead of Irkutsk, Windhoek, and Perth. It is admittedly not just an aluminum shed and a ski-doo or two. But still one gets the sense there that the entire settlement could be easily dismantled and quitted overnight, as one might pack up a polar research station. I’ve lived in Montreal for three years, and still, every time a Canadian commences another soporific paean to the place I think to myself: where is this city you keep mentioning? I must still be lost in the banlieue. I must not have discovered that dense and vital core of the place that would justify all this effusive praise. And so I consult the map repeatedly, and determine to my confusion that I have by now been just about everywhere in the city, indeed that I live in the centre-ville. In New York, in contrast, I always know, in the same way I know I exist, that I am most assuredly, metaphysically there. You cannot be in New York and doubt that you are in New York.

A student of mine recently returned from her first trip to New York and announced that it is ‘not all that different’ from Montreal. She noted that there is virtually the same concentration of hipsters in each place, and that many New York hipsters are listening to Montreal bands such as Les Georges Leningrad. Call it ‘the hipster index’. In Baltimore, Tucson, Cincinnati, and even Edmonton, there are plenty of ruddy youngsters who collect vinyl, make objets d’art with trash they find, do yoga, declare ‘I’m not religious per se, but I consider myself a very spiritual person,’ read Jung and Hesse and Leary and (‘just for fun’) their horoscopes, have spells of veganism, try to build theremins, decorate with Bettie Page artifacts, and speak disdainfully of that empty abstraction, ‘Americans’. I’ve been to these places, and seen them with my own eyes. All these places rank very high on the hipster index. I’m afraid, though, that I am reaching a period of my life in which I measure the greatness of a place by other indices. Like beauty, for instance, and the intensity and importance of the things the grown-ups there are up to.

The other city often invoked in order that Montreal might borrow a bit of greatness is, of course, Paris. The city on the Seine, but without the jet-lag, is how the tourism industry packages it. I think this has something to do with the fact that a French of sorts is spoken in the province. But an English (of sorts) is spoken in Alabama, and nobody thinks to invoke London to try to get people to go there. It is odd, when you think about it, to make a claim to greater affinity with the Old World on the mere basis of la francophonie. After all, every major language of the New World (excluding those of the First Nations) belongs to a European branch of the Indo-European family, but this doesn’t give Brazil, Panama, or the United States any special foothold in Europe.

I have been to Paris, and stood at intersections waiting to see pick-up trucks pass by with bumper stickers exclaiming the French equivalent of ‘This vehicle protected by Smith & Wesson,’ or ‘U toucha my truck, I breaka u face.’ They don’t have these there. They don’t have strip malls, or ‘new country’, or donuts, or (regrettably) coffee to go, and WWF wrestling has not made much of an impact.

The situation is quite different in Quebec. La belle province is 100% American, in the early-18th-century sense of the term, and Montreal is but an outlying provincial capital. The metropolitan capital to which Montreal is subordinated is New York. What counts as center and what as periphery does not, of course, stay the same forever. A few more decades of incompetent US government and global warming may change the balance between the two cities. For now, anyway, this is just how things are.

A very happy new year from 125th Street in Harlem. I will be returning to my usual, deracinated life up north a few days from now. If they’ll still let me in.

Dispatches: Divisions of Labor III

Strikes have engulfed New York City this winter. While members of the Transit Workers Union have gone back to work, NYU graduate assistants are preparing to resume picketing with the start of term on January 17th (usual disclaimer: me too). The situation is simultaneously encouraging and grim. Administrative threats of three semesters’ loss of work and pay have caused some attrition, but, impressively, have not broken the strike. By comparison, the 1995-6 Yale grade strike ended after threats of a similar variety – perhaps having already had union recognition and a contract has made the NYU graduate assistants more optimistic. Individual departments’ attempts to protect students from the severity of the administration’s punitive measures have mostly fallen short of extending any promises to those who continue picketing on the 17th. The climate, then, has become inhospitable to assistants who, for entirely legitimate reasons (among them, concerns over visa status, financial hardship, and impeded career advancement), no longer find enough certainty with respect to escaping potential reprisals. So far from signifying dissent from the union, however, these losses measure instead the level of vituperation with which the university sees fit to treat its members – the preservation of a ‘collegial’ relation to whom supposedly necessitates the union’s destruction. Here, rather than attempt an ethical adjudication (a perusal of the relevant documents will allow you to do that for yourself), I think it might be useful both to narrow and widen the usual perspective, which sees the university as the relevant object of focus, in order to consider some relevant internal differences as well as some external factors in this conflict. (For the basic dossier, see the Virtual Mind strike archive.)

To begin with, a narrower focus. Much discussion of late has had to do with the alleged concentration of strikers in the humanities and social sciences. Like many assertions in this debate, it usually remains unsubstantiated, circulating instead as a dark hint that the strike is the result of naive idealism. Consequently, NYU President John Sexton often describes graduate assistants in infantilizing terms, reinforcing the idea that their grievances are an immature form of teenage rebellion. Furthermore, such infantilizing rhetoric carries with it the paternalistic notion that the university administration should be trusted to have its charges’ best interests at heart, even and especially when said charges are misbehaving. The longstanding association of the humanities with countercultural protest, amplified by the academic “culture wars,” in this case serves to delegitimize, and render strictly cultural, complaints of exploitation by graduate students. Strategically, then, this emphasis on the culture of protest over social analysis is a favored tactic of the administration and its supporters: as one anti-union philosophy professor put it in a weblog discussion of the strike, “if graduate students don’t want to be treated like spoiled children, they should stop behaving like spoiled children.” (Of course, the irony of this tautological ad hominem attack is that graduate assistants are attempting to dispute just this characterization of their position.)

Here I might return to the theme of “collegiality.” The picket line, with its chanting, drumming, singing – in short, its performativity – is by its nature often carnivalesque: not only the ordinary collegial etiquette, but the very habitus, or social and bodily disposition, of university life is suspended by it. The result is an unleashing of pent-up energies and frustrations of many kinds, including elements that exceed the basis of the conflict, such as the offensive nature of the university’s communications with graduate assistants. This is why the defense of collegiality has become an important high ground to the administration: harping on it allows the picket line’s symbolic excess to be depicted as a form of reactive immaturity. Paradoxically, immaturity is also seen to be a form of belatedness: Sexton’s euphemistic corporate terminology of an “Enterprise University” and “University Leadership Team” leaves no room for such “dated” practices as strikes and protests, and the supposedly expired sixties radicalism from which they are thought to stem. Just as the domain of the humanities is linked to anachronistic countercultural protest, so then is the social practice of picketing. On both counts, we’re both too young and too old, past our sell-by date before we grow up. This argumentative tack, however, allows for the obfuscation of the original conflict. Even so, analyzed as a cultural form, the picket line performs an important function: it inscribes and instantiates the strike both to observers and in the minds and bodies of those striking. As Louis Althusser might have said, it “interpellates” (roughly, allows the self-recognition of) those who take part, and thus functions as a radicalizing action. Insofar as it refuses collegial dialogue and substitutes the implacable presence of the bodies of strikers, picketing only belongs more purely to the category of action.

Whatever the ideological hailing effects of picketing, if humanities students are strongly in support of striking, the true cause is not a nostalgic commitment to counterculture. The sociological facts on the ground, which are cleverly obscured by the strategy of infantilization, provide much more compelling justification. Unfortunately for the University Leadership Team’s propaganda efforts, graduate study these days tends to include discussion of the sociology of graduate education itself, which has become an important sub-field in literature departments. Doctoral students thus know all too well that fewer than half of them receive tenure-track jobs within a year of receiving a diploma; that the number of non-tenured teachers continues to grow at a much faster rate than that of tenured faculty across the disciplines; that universities continue to rely on graduate and adjunct labor, while a proportionally shrinking number of tenured professors enjoy the privilege of teaching only upper-level and graduate courses; that graduate assistants teach nearly all introductory courses in language and literature; and that collectivization is the rational response to the exploitation of a labor pool. These are not cultural differences between bohemian graduate students and technocratic administrators; they are social realities. And although these realities are not restricted to the language and literature programs – not at all – these departments have been affected very deeply by this macrocosmic shift in the structure of university teaching.

For this reason, which the “U.L.T.” knows as well as we do, a “New Policy” was announced in November by the university’s deans, which stipulates that graduate assistants’ normal teaching load of two stand-alone courses per semester will be reduced to one (this will primarily affect language and literature graduate assistants, as they teach most of the stand-alone courses). On the face of it, an early Christmas present, no doubt unrelated to the strike. In practice, however, it means three things. First, the university is suddenly authorizing itself to hire large numbers of new adjuncts to fill the newly vacated positions, in contradiction to its expressed aim of reducing the amount of contingent (adjunct) labor, without it looking like these are replacements for striking workers. Why, they’re simply being brought in to fill brand-new positions. The fact that these adjunct professors might conveniently be asked to substitute for striking workers is doubtless a coincidental side benefit. Second, it nourishes the university’s paternalist stance: reducing the teaching load strengthens their claim that graduate teaching is nothing more than apprenticeship or training, and that long-term shifts towards graduate and adjunct labor are being magically reversed. They really care! And third, most disturbingly, graduate assistants who choose to take on the heretofore normal load of two courses next semester can “bank” the extra course, and collect a free semester of funding in the fall. That’s right: teachers who strike this spring semester will lose their work and pay for the next three semesters, according to the Provost, whereas those who return to work and teach what until now was the standard two courses will receive a semester of free money. It might be supposed this will not foster a collegial atmosphere amongst teachers. Best of all, for the administration, this policy will primarily affect the language and literature programs, where students have a clear-eyed view of the labor issues involved because of their disciplinary location and thus strongly support the union. One is perversely impressed with the shrewdness of this policy, although one is also sure that the law firm NYU employs to eradicate the union is more straightforwardly proud.

Finally, by way of briefly widening the focus beyond the institution of the university, let us consider NYU in a larger context. As this investigative piece in the Nation reveals, the MTA’s leadership has been engaged in a number of lucrative business dealings involving renting office space to its corporate sub-contractors. All this has been financed through public debt, and overseen by the presence on the MTA of the very people who stand to gain the most from such arrangements, but whose interest in public transportation is unclear. At NYU, the body with whom ultimate authority rests is the Board of Trustees (here is some background on its chair and vice-chairs). In an example of determination in the last instance by the economic sphere, to again allude to Louis Althusser, this board is populated by people whose interests are very different from those of university teachers. Composed largely of financiers, corporate lawyers, real estate developers, and the leaders of media conglomerates, the board has shown very little interest in the sympathetic appeals of graduate assistants and our claim that the union palpably improved working and learning conditions at NYU. Of course, the commonly held conception of the university as the privileged space outside of the dominance of corporations in American society tends to disable the recognition that, in fact, universities reside within the sphere of economic determination, and are not necessarily any more amenable to arguments based on social justice than any other type of institution. The indifference of the board to the measurable benefits of unionized graduate assistants only reconfirms this. In fact, perhaps one can go so far as to postulate an inverse relation between the progressive prestige of a university and its hostility to a collectivized workforce: as evidence, one can adduce the intensely anti-union positions of the Ivy League schools. An ambitious school such as NYU is no doubt under immense pressure from the administrators of its more established siblings to resist precedent-setting unionization, and along the way absorb all the costs and bad publicity that accrue to union busting. Sadly, NYU seems more than happy to take one for the team it wishes to join, and thus to leave in place this inversion by which institutions that loudly espouse progressive agendas in their publicity materials are the same ones that most viciously fight to prevent them from gaining any ground. A consolation: if we win, perhaps they will eventually realize that they have too.

Dispatches:
Divisions of Labor II (NYU Strike)
Divisions of Labor (NYU Strike)
The Thing Itself (Coffee)
Local Catch (Fishes)
Where I’m Coming From (JFK)
Optimism of the Will (Edward Said)
Vince Vaughan…Eve Sedgwick (Homosocial Comedies)
The Other Sweet Science (Tennis)
Rain in November (Downtown for Democracy)
Disaster! (Movies)
On Ethnic Food and People of Color (Worcestershire Sauce)
Aesthetics of Impermanence (Street Art)

Monday Musing: ‘Tis the Season for Lists

For many years Abbas and I have spent the occasional evening composing lists of the greatest this, the smartest that, and the most overrated other. As you can imagine, it usually comes at some late moment when we’re tipsy. It’s a silly act of camaraderie which I would do with very, very few others. For me, it’s also a very private affair, which is precisely the opposite of the role that lists play in society.

I was reminded of it this holiday season, as I am on every other holiday season, because it is the season for collective judgment. Sometime between the beginning of November and the end of January, we are bombarded with lists, usually top 10 lists—and not just the best books, best fiction, best non-fiction, best movies, best albums, best songs, and their complementary ‘worsts’, but also worst disasters, worst web design mistakes, best and worst toys, and industry- or subculture-specific objects that are, so to speak, too numerous to list.

A list is different in kind and in effect from a simple “person of the year” or other declaration of a superlative. The latter sorts of things usually require some extensive justification of the judgment. If I were to say that Tony Judt’s Postwar was the best book that came out this year, you may reasonably ask why I thought so. And I would give a host of reasons to defend my claim. (In this instance, the claim is hypothetical.) But once I list runners-up, I’m forced to answer different questions—why a work of history over fiction? why this prose style over that one?

This comparative quality of lists is the seductive virtue that turns the whole affair into a participatory event. (I was thinking about this when Abbas was soliciting top 10 books of the year from 3Quarks editors.) Relative judgments seem to engage us more than absolute ones. Say Hitler is a monster, you have no quarrel. Say Hitler is a worse monster than Stalin, and then you have a debate. Or if that’s too contentious, try: Franklin Roosevelt was a great wartime leader, against Franklin Roosevelt was a greater wartime leader than Churchill. This is not to say that judgments of the former kind aren’t debated but that the latter elicits more responses and wider audiences. The Prospect/FP poll of global public intellectuals probably did far more to create an audience for Oliver Kamm (with his neurotic Chomsky-phobic rant) than it did for Chomsky. Kamm was part of the debate; Chomsky was its object. And for wider circles, Chomsky’s ordinal rank relative to Daniel Dennett, Richard Posner, or Slavoj Žižek is a more contentious affair than whether he is a well-known and well-respected public intellectual (at least in many circles).

This fun-silly exercise is not restricted to dilettantes such as yours truly. Sidney Morgenbesser once recounted a dinner with Isaiah Berlin spent classifying philosophers into gods, geniuses, brilliant men, smart guys, and a further category whose title I don't recall. They got into a fight over where to place Leibniz, and wound up creating the category of demigods, which became populated solely by Leibniz. The story made me feel less silly.

Now with the audience that Amazon.com brings, these exercises grow more and more common, so much so that Amazon has a name for the practice: Listmania. (But sometimes I wonder whether this need to state our judgments, even over matters of taste, to wider and wider audiences doesn't make us kin to Judge Judy or the mobs found on Jerry Springer.)

Criticisms or reflective assessments of lists commonly begin with something like: “Lists say more about those who construct them than they do about . . .” the object, or the real world, or whatever else they’re supposed to tell us about. That of course is trivially true, in the sense that any made object tells us something, often a lot, about its maker. But it is true that lists generally deflect attention away from the criteria for judgment and, quite often, the judge. (“Judge, lest ye be judged,” Karl Kraus once said.) This is so even when the criteria for judgment are made fairly explicit.

Interesting lists offer us not so much new rankings as new dimensions for evaluation. The lists that fill much of the Pillow Book of Sei Shonagon, lady-in-waiting to the Empress Sadako (or Teishi) during the Heian period, are wonders. Each list evokes memories and sensations rather than judgments and thereby disagreements. Some of my favorite lists of Shonagon’s:

109. Things That Are Distant Though Near

Festivals celebrated near the Palace

Relations between brothers, sisters, and other members of a family who do not love each other.

The zigzag path leading up to the temple at Kurama

The last day of the Twelfth Month and the first of the First

And especially,

44. Things That Cannot Be Compared

Summer and winter. Night and day. Rain and sunshine. Youth and age. A person's laughter and his anger. Black and white. Love and hatred. The little indigo plant and the great philodendron. Rain and mist.

When one has stopped loving somebody, one feels that he has become someone else, even though he is still the same person.

In a garden full of evergreens the crows are all asleep. Then, towards the middle of the night, the crows in one of the trees suddenly wake up in a great flurry and start flapping about. Their unrest spreads to the other trees, and soon all the birds have been startled from their sleep and are cawing in alarm. How different from the same crows in daytime!

The lady Murasaki Shikibu, author of the Tale of Genji (circa 1000 A.D.), one of the earliest novels ever written, and a contemporary of Shonagon, described her as “frivolous”, and concluded that “[h]er chief pleasure consists in shocking people, and, as each eccentricity becomes only too painfully familiar, she gets driven on to more and more outrageous methods of attracting notice.” But this is precisely the virtue of lists such as Shonagon’s; they get people to notice by pointing to new dimensions and new collections, and not simply to our judgment. If we can't be outrageous with the playful, where can we be?

The lists don’t have to consist of exotica. Nick Hornby did a remarkable job of using simple lists to construct a seductive story in High Fidelity. But when they do consist of exotica they really seduce, as in the case of many of Borges' stories. It’s probably a little late now, but for next season, I suggest new kinds of lists, ones that speak of our wit, creativity, and even whim.

Happy Monday and a Happy New Year.

Happy Newton’s Day!

Despite the fact that December 25th happens to be the birthday of a number of important historical figures (for example, Mohammad Ali Jinnah, the founder of Pakistan, which is where I am from), last year we at 3 Quarks Daily thought we would celebrate Newton’s birthday on this date. Unbeknownst to us, Richard Dawkins had just published an article suggesting the same thing. We were flattered. So here we are again, on Newton’s Day!

To commemorate this auspicious occasion, I thought I would try to deal with the apple today. You know what I am talking about: the apple that supposedly fell on Sir Isaac’s head while he was resting under a tree, and which jarred him into formulating the theory of gravity. The story is almost certainly apocryphal (no getting away from the Bible, is there?), but what could it mean? There are many ways to try and understand this story, but I just want to point out something simple but very cool: look at my drawing of me lobbing a ball over to a friend of mine below.

[Drawing 1: a ball tossed to a friend follows a parabolic arc.]

The ball follows a parabolic path from my hand to those of my friend. But what if my friend wasn’t there, and neither was the surface of the Earth? What if the ball could just pass through the Earth as if all its (the Earth’s) mass were concentrated at its center? What would happen then? Look at my next drawing below.

[Drawing 2: the full trajectory is an ellipse with one focus at the Earth’s center.]

The ball would go in an elliptical path, with one of its foci being the center of the Earth! The parabola above the surface of the Earth is just one end of the bigger ellipse! Where had Newton heard of ellipses before? Yep, from Kepler, who had shown that planets travel in elliptical orbits around the Sun. How’s that for a connection between small objects falling on Earth, and the heavenly spheres? Of course, we’ll never know Newton’s real line of reasoning, but here’s a possible one:

  • apple falls on Sir Isaac’s head
  • he starts to think about how freely falling objects behave
  • he generalizes to objects following parabolic paths
  • he imagines what happens if the surface of the Earth doesn’t stop the object
  • he realizes the object falls into an elliptical path
  • he realizes planets are just “falling” around the Sun
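
For anyone who wants the connection spelled out, here is a quick gloss in modern notation; this is standard orbital mechanics and my own addition, not anything Newton wrote down this way. Treat gravity as uniform near the surface and the throw traces the schoolbook parabola; treat it as an inverse-square pull toward the Earth’s center and the same throw traces a conic with one focus at that center:

\[ y = x\tan\theta - \frac{g\,x^{2}}{2v_{0}^{2}\cos^{2}\theta} \qquad \text{(uniform gravity: a parabola)} \]

\[ r(\varphi) = \frac{p}{1 + e\cos\varphi}, \qquad p = \frac{L^{2}}{G\,M_{\oplus}\,m^{2}} \qquad \text{(inverse-square gravity: an ellipse when } e < 1\text{)} \]

Here \(v_{0}\) and \(\theta\) are the launch speed and angle, and \(L\) is the ball’s angular momentum about the Earth’s center. The everyday “parabola” is just the tip of a very elongated ellipse, seen over a flight that is tiny compared to the Earth’s radius.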

Okay, it probably wasn’t that way, but I still think it’s a nice thought. And in case you’re wondering just how big Newton’s intellectual reputation is, check this out from the London Times:

Newton trounces Einstein in vote on their relative merits

His most famous equation, E=mc², is 100 years old, and 2005 has been named Einstein Year in his honour, but Albert Einstein has been trounced in a scientific beauty contest held to celebrate his own greatest achievements.

The most famous head of hair in science was soundly beaten by Sir Isaac Newton yesterday in a poll on the relative merits of their breakthroughs, with both scientists and the public favouring the Englishman by a surprisingly wide margin.

Asked by the Royal Society to decide which of the two made the more important contributions to science, 61.8 per cent of the public favoured the claims of the 17th-century scientist who developed calculus and the theory of gravity.

More here. And, oh, what the…

MERRY CHRISTMAS!

[This post dedicated to LWP.]

Dispatches: The Thing Itself, or the Sociology of Coffee

In the movie “My Dinner With Andre,” a touchstone for the antic film buff, Wally Shawn muses about the things that make life bearable despite the heavy weight of human suffering and existential dread that torment his friend Andre Gregory. “I just don’t know how anybody could enjoy anything,” he says, “more than I enjoy… you know, getting up in the morning and having the cup of cold coffee that’s been waiting for me all night, that’s still there for me to drink in the morning, and no cockroach or fly has died in it overnight – I’m just so thrilled when I get up, and I see that coffee there, just the way I want it, I just can’t imagine enjoying something else any more than that.”

This little reverie has always struck me as a note-perfect piece of writing (or speaking) by Shawn, who here depends on a long-running association of coffee with a form of escape from the prosaic, even as it fuels that most prosaic form of labor, writing. The social meaning of coffee combines its conception as the fuel upon which workers of all kinds rely with the notion of the coffee break, the oasis in the day in which workers are temporarily freed from adherence to their routinized schedules and can indulge in idleness. The twin sites of coffee drinking, the coffee shop and the cafe, represent the two class locations in which these escapes can occur: the coffee shop for laborers and the cafe for the intellectual, who turns her own idle philosophizing into her special form of production.

The association of coffee with both labor and the emancipation from labor is a long one. In the standard narrative of the Enlightenment, coffee shops in London in the late seventeenth and early eighteenth centuries played a large role as sites that hosted workingmen’s collectives and other forms of nascent intelligentsia. Jürgen Habermas, for example, famously identified the London coffee houses as the birthplace of the modern critique of aristocratic power in the name of liberty in his influential The Structural Transformation of the Public Sphere. Habermas’ claim that the public sphere expanded and developed into an inclusive site in which middle-class interests could be voiced also gestures at the interesting social connotations of coffee drinking: a practice that bridges the public world of letters with the private world of internal reflection, a duality that remains in effect to the present time. Coffee is the special beverage of intellectual labor and mental stimulation, and along with other products of the tropical colonial world, such as tea, sugar, and spices, perhaps accrued its social meaning precisely because of its novelty and the absence of pre-existing traditional associations with its consumption (as would be the case in Europe with beer, for example).

Until 1690 or so, nearly all the coffee imported to Europe came from Yemen, after which time the West Indies began to dominate, due to large plantations established by European colonialists, until roughly 1830. London at this time was the major trading center for the world’s coffee supply, supplanted by Rotterdam and coffee from Java later in the nineteenth century, and in the twentieth by Brazil’s production and New York’s factorage, or management of trade. What is interesting about this extremely truncated potted history is how little known it is, beyond the vaguest associations with these locations and coffee drinking: Java, for instance, or Colombia more recently, being places generally associated with the commodity. The fact that coffee as a crop is extremely amenable to the large-scale plantation system had much to do with its spread around the world, and also with the inculcation of the desire to drink it. Coffee as a commodity has also been extremely important to the development of the global economy, perhaps second only to oil. Between the world wars, coffee surpluses in Brazil grew so large that enough beans to supply the world for two and a half years were destroyed, prompting the development of international agreements to govern the flow of trade and prevent the destructive influence on prices of huge surpluses.

As you will have guessed, what I’m interested in here is the caesura between the social meanings of coffee and its consumption, on the one hand, and the economic and historical conditions in which it is produced, on the other. Note that our airy and metaphysical associations of coffee with scribal labor, and our notion of cart coffee as the fuel for wage workers, show no trace of the globally instituted plantation system of production and distribution that allows for its availability. Now, a sociologist friend of mine, upon hearing my thoughts on this subject, remarked dryly: “Well, yes, standard Marxism, the commodity always conceals the conditions of its production.” Which is true, yes, my friend, but I think there’s a bit more to it than that, when we come to our present age of late capitalism (to adopt the favored descriptor). For we specialize in nothing so much as the inflection of meanings in order to create and reinforce markets for products: the process called branding. Coffee has presented an interesting problem for marketers because it suffered from the problem of inelastic demand.

What this bit of jargon means is simply that coffee drinking was typically habitual and not generally considered to be divisible into gradations of luxury. In other words, people do not make fine distinctions when it comes to coffee – and indeed, the world’s coffee market is dominated by one varietal, arabica (though robusta is often used in cheaper brands as a blending ingredient – in fact, the whole question of why arabica is considered superior to robusta is of interest, though not of sufficient relevance here). Or at least, coffee was considered to be this type of commodity through the nineteen-eighties. At that point, a revolution occurred with the application of European connoisseurship to coffee-drinking. I am referring, of course, to the vogue for Italian coffee that swept the world at this time. Finally, with the nomenclature of espresso, macchiato, cappuccino, etcetera, marketers had an opportunity to make gradations, to identify a style of coffee drinking with sophistication and taxonomies of taste in such a way as to basically invent a whole lifestyle involving coffee preferences, and thereby to overcome the inelasticity of demand that was preventing consumers from changing their buying patterns.
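
(A textbook gloss, mine and not the marketers’: demand is called inelastic when the quantity bought barely responds to price, that is, when the price elasticity

\[ \varepsilon = \frac{\%\,\Delta Q}{\%\,\Delta P} \]

has magnitude less than one. Habitual, undifferentiated coffee drinking sat in that regime; the taxonomies of taste were a way of manufacturing demand that does respond, to status if not to price.)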

Ironically, of course, most of these gradations have little to do with the coffee itself; rather, they involve the milk, whether to steam it or froth it, add it or not, or whether to adulterate the espresso with hot water (americano), and so on. The effect is the same: making coffee drinking into a form of connoisseurship. My sociologist friend and I recently walked by a Starbucks, the apotheosis, of course, of the current technocratic style of coffee drinking. Outside was a chalkboard, the faux-handwritten message on which inspired these reflections: “The best, richest things in life cannot be seen or bought… but felt in the heart. Let the smooth and rich taste of eggnog latte fulfill your expectations.” Depressingly contradictory, the message also advertises a beverage which may or may not include coffee itself, though the misuse of the Italian word for milk, in the world of Starbucks, usually signifies its presence. Yet there’s also something honest about it, in that it baldly announces the contradictions that the drinking of coffee embodies. Seen to be the escape from the prosaic, the fuel for the laborer, the joiner of public and private, the psychoactive stimulant that incites philosophy, coffee, so far from a purely metaphysical vapor, contains all the strange compressed complexity of the world of real objects and the webs of relations that bring them to our lips.

Dispatches:

Divisions of Labor II
Divisions of Labor I
Local Catch
Where I’m Coming From
Optimism of the Will
Vince Vaughan…Eve Sedgwick
The Other Sweet Science
Rain in November
Disaster!
On Ethnic Food and People of Color
Aesthetics of Impermanence

Federico Fellini: Circus Maximus

Australian poet and author Peter Nicholson writes 3QD’s Poetry and Culture column (see other columns here). There is an introduction to his work at peternicholson.com.au and at the NLA.

The Circus Maximus was the arena for mass entertainment instituted by the Etruscan kings and then enlarged by later rulers to hold tens of thousands of spectators. Julius Caesar, Augustus and Trajan, among others, added to and embellished the structure. Its chariot races were a particular feature of its activities in later times.

In the theatre of his mind, the great Italian auteur, Federico Fellini, casts forth his films as entertainment, serious entertainment which is worthy of the greatest art that the twentieth century produced. However, it is no good coming to Fellini looking for Thomas Mann or Persona. Fellini is just not that kind of artist. He puts his trust in his feelings, and he believes that feeling is the way to discover the reality of the world. He doesn’t believe in intellectualising about life or art, or in theorising about art either. But to say Fellini is not intelligent in his films would be wrong. Fellini is supremely intelligent as film director. He shapes his films as carefully as any novelist or poet does in the silence of their rooms. Circus Maximus translates from the Latin as largest circle, and it is this largest circle which Fellini draws around the world, enclosing in its phantasmagoric visions the poetry and pain of a loving heart. He invites his audience to participate in his films as would the audience at the Circus Maximus for some games spectacular. You may sit around the edge of the circle and enjoy the surreal passing parade, smell the sawdust, see the most startling use of colour, and of black and white. If he allowed himself to be styled emperor in his domain, Cinecittà, he always did so with a light touch, and he could be scathing about his own persona—he virtually accuses himself of fraudulence in Otto e mezzo. Of course Fellini was no fraudster but a subtle artist of the most unusual kind. The caricaturist from Rimini went on to become a true maestro.

Fellini’s films are musical, and the word maestro is not inappropriate to use in association with his work. His orchestra is his production team—and what a singular group of artists he gathered together for his purposes. It is doubtful that films like Fellini’s could ever be made without this kind of team to work with. Underwriting the whole endeavour is the music of Nino Rota, who provides such an insouciant soundtrack to Fellini’s visual panoramas, by turns tender, melancholic, wistful, or vital and exuberant, music for eating, laughter, dancing and loving. But Fellini knows when to keep the soundtrack silent too. Usually, somewhere in a Fellini film, there is that sudden silence followed by the sound of wind, premonition of an ending Fellini doesn’t try to understand. He simply accepts death as part of the spectacle we must all participate in.

It is to be regretted that the main way people now come to Fellini is through re-release on DVD, or on television. If ever a director needed the big screen it is Fellini, who designed his films as a medium in which there is a participatory audience. I remember two experiences in my early years of picture-going. The first time I saw Otto e mezzo it was a revelation to me, and I also found it profoundly moving. And I almost hurt myself laughing, along with the rest of the audience, when I saw the family argument around the table in Amarcord. I doubt I would have had these reactions if my first viewing of these films had been via the television set. Fellini embraces you through the screen. If you can’t participate in the manner of an Italian feast, you won’t get the best out of his films. These are not works of art for people who want to sit at a distance in judgement. They are meant for enjoyment, involvement. His camera is lascivious, and it gets very close to its subject matter, which some people find disturbing. And people who think Pulp Fiction instituted some new kind of film narrative need to have another look at Fellini’s work, especially from Otto e mezzo onwards, just as clichéd ideas about Fellini’s sexism ignore a lifelong preoccupation with the facade of Italian machismo.

For some, Fellini’s films can be a stretch; they ‘don’t wear well’, his sensibility, with its strangely compatible dual carriageway of sensuality and moral prodding, being at odds with present conformities. Satyricon especially is difficult to get hold of. On one hand it is a spectacle which Fellini fills with characteristic striking visuals. On the other, it comes across as cold, as if one were visiting a moonscape. Fellini called it science fiction of the past. Perhaps it was his comment on what he saw going on about him in supposedly liberated times. True, you don’t want to sit down to a tranche of his films in one go. His work is intense, baroque. There is maximum sensory overload. He is like Emily Dickinson and Bruckner in that way; you can’t take on too much of their intensity at one time. It would be a mistake to try to because then the appetite sickens. These are artists who are for a lifetime. You can always come back to them and their depth and seriousness will always be there for you when you have need of it. The fact that Fellini insists on joy being part of his sensibility makes him the major artist he is—he refuses to degrade himself in the manner of so many European intellectuals and artists who mortify themselves with doubt, self-hate and cynicism. It is not as if Fellini avoids tragedy. Who could forget Zampano’s despair on the beach at the end of La Strada, or Marcello’s horror at Steiner’s suicide and the murder of his children in La Dolce Vita? Fellini is the realist who accepts suffering but who nevertheless insists on the pleasure principle too. One of the things Fellini takes most pleasure in is the human face. For him, it is endlessly fascinating. Fellini does not, contrary to a lot of Fellini criticism, put freaks in his films, but rather the variousness and beauty of the human face and form. In that sense he is a portrait painter, filling the screen with characters that give witness to the strangeness and majesty of the human: the alluring image of Anita Ekberg standing in the silent Trevi Fountain, the fantastical ecclesiastical fashion parade in Roma, the out-of-touch aesthetes on board the Gloria N. in E la nave va.

Was ever a director luckier than to have Giulietta Masina as a life companion? How one marvels at this actor’s performances in Fellini’s films. I am especially fond of her work in Giulietta degli spiriti. Here was companionship that led to beauty and greatness. But all Fellini’s actors seem to belong to a troupe. The circus master may crack the whip, but what performances he gets from his casts. How vital his characters seem with their dreams and delusions, their grandeur and pettiness, their gross appetites, their inwardness and hopefulness.

Opera, theatre, cabaret, vaudeville, circus. Luminous and celebratory, fantastical but all too real. Cinema. Art. Fellini is all of these things. For me, his films are inimitable, poetic, unforgettable.

— *** —

The following poem was written in late October 1993 when the press reported Fellini’s stroke.

               Intervista
        Federico Fellini 1920–1993

Maestro, lover, dreaming poet,
Must we say farewell just now?
Here on this uncertain street
Of a tawdry century
You encompassed multitudes,
From the fountain’s quietude
To a seaside ecstasy.

Trumpets at the darkened gates?
But how are we, who trust you still,
Beyond the failings that we share,
To live without your gaiety,
Except that in film flickering
And in Giulietta’s eyes
We know your passion will be strong.

Soon to sawdust you must drop
And circus clowns will hang their heads,
But while you live the world seems good,
For a pure heart brings such grace
And mischief that shows kindliness.
Stay to see our wretchedness
You maker of the marvellous!

But stillness is approaching now,
Tender, as this last spool spins
To silence in unending night.
Ciao, dear artist. May you slip
Quickly to that other side
Behind the screen, and leave us with a smile
Whose joy is deep, whose laughter was so wise.

Intervista: Fellini’s penultimate film
Giulietta’s eyes: Giulietta Masina, Fellini’s wife

Written 1993

monday musing: las vegas

You forget how much the sky can be different until you come out West again. There is a simple explanation. It has to do with flatness, it has to do with vistas. Maybe the idea of the West as simple and honest comes partly from that. You can see where the clouds are and where they’ve been and where they are going.

One of the myths that gets worked up into a reality in America is the one about the past and history and identity. It is a rejection of the ancient idea that character is fate. It is the idea that one can be anything one wants. This is largely a lie of course, but so what. There is no going back. Las Vegas is a place where it is pretty clear that no one is going back to anything. But at the same time, it is trying to have its cake and eat it too. There’s no past in Vegas but even so the place crazily attempts to fit the entirety of human history onto a few miles of the Strip, from the ancient pyramids to New York City. Pretty amazing.

What I’m trying to say is that Las Vegas makes no sense but it explodes into a phantasm every night anyway and then dies into the sunlight. It doesn’t even really exist during the daytime. There is something a little sad about a place that is such a spectacle at night and so invisible during the day. It is an American sadness somehow, a Jay Gatsby sadness. And it is a strong sadness, or there’s a strength in it. There’s a depth there all of a sudden just when everything seemed pure surface.

Las Vegas evaporates into the sand, into bits of road that give up so suddenly it can make you laugh. All at once you’re at the end of the world and there are just the dusty mountains beyond that make some final limit to how far the housing developments can creep. The desert said, “you can have this bowl in which to fester and glow.” At the center of it all stands the Strip, generating the outward push.

The great casinos of the last fifteen years have developed according to one overriding impulse. That impulse has been to capture the imaginative spaces of the world for commerce. Of course, the real world spaces, Paris, Venice, New York, etc., are centers of commerce in themselves already. But what is immediately striking about Paris or Venice, Las Vegas is how each place has been contained, distilled, and represented as a manageable version of the original, purged of everything but its symbolism and imagery and reconstituted into gaming areas and shopping corridors. The thirst for civilizational spaces seems to be the primary driving force in the most recent incarnations of the Strip. The indeterminate fantasy zones, the Palms, Sands, Sahara, Flamingo, and even the more recent Treasure Island have receded into the background or disappeared altogether. The fun of those older fantasylands was the fun of pure play. They were escapes beyond the boundaries of any possible world. The newer spaces are interesting in that they are playing with the real world. Vegas has the audacity to recreate and thus lay some claim to the actual world, to other locations with which it shares space and time. And yet it is still under the guise of a giant wink and nudge, a knowing smile. But the sense of funny is muted. It’s muted because of the sadness, the sadness that is, one must suppose, at the center of gambling and its infinite monotonous temporality. What a stunning audacity to claim the entirety of world culture for your own and then admit that you don’t care, that it is as boring and empty as the mechanism of a slot machine – a machine that, in the end, merely produces randomness. And it might be an honest admission too. Las Vegas never seems to be having the fun that it claims to be having, is always more aware of itself than it would like to be. It’s more exhausted than exhilarating. At least, this is how one can start to like it. There is something honest about it even as it is playing all these games and pretending to be so many things at once.

At New York, New York, Las Vegas they have built a rollercoaster that weaves in and out of the compacted skyline of New York City. It is fast and scary. You get a glimpse of the back side of the Statue of Liberty before you tumble down again in another loop or corkscrew. I love it. They should put one in the real New York. You could see people screaming in a loop around the Citicorp building or spiraling down the side of the Empire State. The theorists of simulacrum would send up a cry: one more defeat for the really real! But they would be wrong. There is no lack of reality on the Las Vegas Strip. It is just one more version of things. From the top of one of the big drops at the New York rollercoaster you can see the mountains outside of Vegas for a moment before you plunge. If it is twilight you will see a very special desert light. It is soft and clear at the same time, gentle and brittle simultaneously. And then you speed down into Brooklyn while the whole car train screams and laughs in delight and the mountains are gone in the distance again. Las Vegas.

Poison In The Ink: About This Year’s Tribute in Light

The 88 spotlights had been turned on since late afternoon, but it was only as dusk fell that their beams became visible. Wan and informal at first, their lights grew steadily brighter as the evening grew darker. The spotlights were aimed skywards and set up inside a fenced-in lot in lower Manhattan’s Battery Park City, divided up between two raised platforms and arranged into two squares. They remained lit from dusk on September 11 until dawn the next day, their beams combining to form two phantom pillars of pale blue—ethereal echoes of the World Trade Center towers that once stood nearby and a tribute in light to the victims who died when those towers fell.

Seen from a distance, the columns appeared to sparkle as light reflected off small objects moving swiftly in the beams. The effect was beautiful but unintentional. The tribute designers never planned for it, and no such sparkling was seen the first few times the memorial was lit.

When the effect began appearing in 2004, people didn’t quite know what to make of it. Some thought that dust or ash had inadvertently become illuminated, like floating dust in a shaft of light. Others thought that maybe fireworks had been set off and that distance was muting the sounds of their explosions.

Rebekah Creshkoff was in Battery Park that evening, close enough to the tribute to know that none of these guesses were correct. Gazing up, she didn’t see dust or ash or fireworks, but wings. Thousands of wings. Wings from birds and bats and moths that were flying in and circling the beams. Flapping and flitting and fluttering, the wings reflected and scattered and dispersed the tribute’s light, making the pale spectral beams sparkle.

The first thirty feet of the beams were packed with moths that had been drawn to the lights. On the platforms where the spotlights were arranged, Creshkoff saw technicians wearing dark sunglasses walk from one lamp to another, using cloth rags to wipe off the blackened husks of those that had strayed too close to the 7,000-watt lamps.

Creshkoff is the founder of New York City Audubon’s Project Safe Flight, an organization that has been monitoring bird casualties in the city since 1997. During the spring and fall when birds migrate, volunteers patrol the perimeters of skyscrapers in Manhattan’s midtown district, collecting and cataloguing dead and injured birds, casualties either of mid-air collisions—with the sides of glass buildings or with each other—or of exhaustion after circling repeatedly around especially bright lights.

Gazing up at the Tribute, Creshkoff worried that a similar end might await some of the birds wheeling overhead.

“Sometimes they would break free so you could see them exiting the light and then they’d come back into the shaft of light,” Creshkoff remembered. “They would do that repeatedly—break free from one beam and get sucked into another one.”

Creshkoff remembers looking up at the birds and feeling sick. “It was really distressing to someone who has been following this problem like I have,” she said.

Creshkoff didn’t linger long at the tribute. After about half an hour, unable to bear the birds in distress any longer, she headed home.

Creshkoff had witnessed the same thing happen the year before. Shortly after that first experience, she came up with an idea that she thought might help the birds. She proposed that the city end the memorial at midnight—before bird migration traffic reached its peak—and have the 88 spotlights turned off one at a time, rather than all of them simultaneously at dawn. According to the plan, the lights would begin dimming around 10:30 pm and then gradually fade out until the last light was turned off at midnight. The plan seemed like a win-win situation: the birds would be spared a major distraction during their migrations and the city would have a beautiful and fitting end to its memorial.

When Creshkoff met with New York City’s Municipal Art Society to discuss the proposal, however, she was kindly told that the plan would never work. The problem, of course, was money—not that there wasn’t enough, but that too much had been invested.

“I thought it was a wonderful idea but a lot of people contributed a lot of money to buy these lights,” Creshkoff said. “They get to use them only once a year. They’re not going to want to see them go out early.”

**********

While Creshkoff was watching the birds from the ground that night, Andrew Farnsworth, a graduate student in Cornell University’s Department of Ecology and Evolutionary Biology, was observing the same spectacle from atop a nearby thirty-eight-story building.

Farnsworth had been hired by Audubon to monitor bird activity around the tribute while it was lit. Concerned about the effects the lights might have on migrating birds, Audubon had made a deal with the city: if observers like Farnsworth witnessed at least 5 birds being killed or injured as a result of the lights, the city would shut down the tribute for 20 minutes to allow the birds time to disperse.

An avid birder since he was four-and-a-half years old, Farnsworth has spent many nights awake just watching and listening to migrating birds; nevertheless, the tribute stakeout was a unique experience for him.

Farnsworth was accompanied by his fiancée, and the two of them camped out on the roof of the building from before sunset on Sept. 11 to sunrise the next day.

“We spent the whole night on top of the building,” Farnsworth said. “Our monitoring consisted of 3-5 minute periods watching the birds in the beam, followed by the same time period watching the general procession of migration in and around the beams.”

Over the course of the night, Farnsworth recorded the presence of a wide variety of birds.

“Small songbirds were by far the most common in the beam, especially warblers,” Farnsworth remembered.

That night, Farnsworth also recorded the presence of 7 veerys, 6 yellow-billed cuckoos, 4 scarlet tanagers, 4 rose-breasted grosbeaks, 3 gray catbirds, 3 indigo buntings, 2 wood thrushes, 1 sora, and 1 American robin; there were also herring gulls, sparrows, northern waterthrushes, yellowthroats and redstarts.

Whenever possible, Farnsworth identified the birds based on their size and plumage color through the use of binoculars and a spotting scope. But sometimes the swirl of birds became too chaotic and visual identification became impossible. When that happened, the birds were identified based on the distinctive sounds each species made. Using a shotgun microphone and a digital audio recorder, Farnsworth recorded the birds’ calls for later comparison against a computer database of bird calls.

The birds started to arrive at about 8 pm, approximately half an hour after the first moths and bats had begun appearing, Farnsworth said. The birds flew at a height of about 300 feet, and those that strayed into the beams lingered anywhere from 2 to 15 minutes at a time.

Bird traffic around the beams was light at first but gradually became heavier as the evening progressed. The peak occurred at about 2:30 am when there were about 50-60 birds in the beams at once and many more circling nearby. By 4 am, the number of birds had dwindled and the last bird flew through the beams at about 6 am.

Scientists don’t really know why some species of migrating birds are attracted to bright, artificial lights or why they become reluctant to leave once they reach the lights’ source.

“Birds are attracted to the light much the same way that moths, insects, and many other organisms are attracted to light,” Farnsworth said. “They’re attracted to lights mostly under very specific conditions, usually associated with poor visibility, low cloud ceiling, fog and haze.”

It’s thought that under such conditions, the birds have to rely heavily on visual cues to guide them. Many bird species are known to navigate by moonlight or starlight, and some scientists think the artificial lights may be acting as a distraction.

“The theory is that once birds focus on such a strong visual cue, they move toward it and once they get to it they do not want to leave it,” Farnsworth said. “Some anthropomorphize this as fear to leave, but it’s probably closer to simply being overwhelmed by the strength of the cue.”

In the end, Farnsworth and the other observers didn’t witness any bird casualties that could be attributed to the lights, and Audubon’s emergency plan was never enacted.

**********

Creshkoff fears that in a few years migrating birds will face an artificial light disturbance that will make the Tribute lights pale in comparison.

The Freedom Tower, the building designed to replace the destroyed twin towers of the World Trade Center, is expected to be completed in 2010. Rising 1,776 feet into the air, the new tower will be the tallest building in the United States. At night, a 400-foot spire atop the tower will emit an intense beam of light penetrating more than a thousand feet into the air above it.

“I just shudder to think what impact that will have on migrating birds,” Creshkoff said.

Creshkoff believes people need to become more aware of the impact of artificial lights.

“Light is beautiful; people like it and get aesthetic jollies from seeing the New York skyline lit up,” Creshkoff said. “[But] we need to educate ourselves and realize that light is not benign, light is not a non-event, light does not have zero-impact either on ourselves or other animals.”

Monday Musing: Richard Dawkins, Relativism and Truth

[Please see NOTE at end of this post.*]

Richard Dawkins has been an intellectual hero of mine since college, where I first read The Selfish Gene. Though I thought I understood the theory of evolution before I read that book, reading it was such a revelation (not to mention sheer enjoyment) that afterward I marveled at the poverty of my own previous understanding. In that (his first) book, Dawkins’s main and brilliant innovation is to look at various biological phenomena from the perspective of a gene, rather than that of the individual who possesses that gene, or the species to which that individual belongs, or some other entity. This seemingly simple perspectival shift turns out to have extraordinary explanatory power, and actually solves many biological puzzles. The delightful pleasure of the book lies in Dawkins’s bringing together his confident command of evolutionary theory with concrete examples drawn from his astoundingly wide knowledge of zoology. Who doesn’t enjoy being told stories about animals? If you haven’t read The Selfish Gene, do yourself a favor: click here to buy it, and read it over the holidays.

I have read all his subsequent books, and Dawkins has only gotten better. Last year around this time, in a roundup of the best books of 2004 here at 3QD, I wrote this about The Ancestor’s Tale:

This is Dawkins’s best book in years, and he has never written less than a brilliant book. The literary conceit which lends the book its title is, of course, that of Chaucer’s Canterbury Tales. Dawkins’s tale is that of all of life. Starting in the present he travels back in time to meet the common ancestor of humans and chimpanzees, then further back to meet other ancestors connecting us to other life forms, and so on, until we are at the origin of life itself. At close to 700 dense pages, the book is filled with a massive amount of biological information. The sweep of Dawkins’s erudition is truly astounding, and if you find yourself getting exhausted at times by the relentless and seemingly endless litany of facts, keep going: at some point toward the end, I had the supremely ecstatic experience of being absolutely awed at the majestic grandeur, variety, and tenacity of the whole history of life, as well as at the prodigious effort that has gone into classifying and understanding it.

Professor Dawkins has also been kind to me personally. Upon hearing of my father’s death earlier this year, he sent a warm note of condolence along with a beautiful passage about death from one of his books, and he has been appreciative of our efforts at 3 Quarks Daily, as you may have noticed if you have ever clicked on our “About Us” page.

I have never had anything much of interest to say about Richard Dawkins’s writings because I agree with 99% of what he says. He has also inspired feelings of gratitude and loyalty in me, so I am loath to disagree with him. But (you knew there was going to be a but, didn’t you?) there is that 1%, and the twin dictates of intellectual honesty and deep respect for Professor Dawkins compel me to say something about that today. I am only able to muster the requisite temerity because our small disagreement is about a subject which I probably have spent much more time studying than he: the philosophical matter of truth. Because I do not have space here to write a lengthy disquisition on truth, my treatment of it here will necessarily be somewhat cursory, but I am hoping that it will be enough to show that Dawkins’s concept of truth is overly simple.

In the past century, scientists seem to have become increasingly hostile to philosophy. Einstein was representative of a dying breed of physicist with a philosophical bent (see this). By the second half of the twentieth century, scientists were frequently openly contemptuous of philosophers. For example, in his popular books, the famous physicist Richard P. Feynman often expresses an impatient disdain for the whole subject: “Philosophers say a great deal about what is absolutely necessary for science, and it is always, so far as one can see, rather naive, and probably wrong.” The hijacking of philosophy by literary theory that took place in the 1980s and 1990s in the American academy, and its subsequent conflation with cultural studies, minority studies, and other disciplines, mostly indulged by English departments across the country, with all its attendant (and now notorious) obscurantism and lack of rigor, certainly did not help matters. It was only a matter of time before an Alan Sokal would appear to burst that bloated bubble, and he did. Meanwhile, philosophy departments continued their more sober reflections, but science’s attention had by now been fixed on the regrettable abuses of science by a handful of postmodernist thinkers. (Where were these welcome objections to nonsense in the heyday of Freudian psychoanalysis, by the way?) What has resulted is a widespread tendency on the part of scientists not only to dismiss philosophy, but to do so in a facile manner, more often than not using the straw man of relativism. And Richard Dawkins, too, has fallen into this tempting trap.

The second piece in Dawkins’s collection of essays entitled A Devil’s Chaplain is “What is True?” and it begins this way:

A little learning is a dangerous thing. This has never struck me as a particularly profound or wise remark, but it comes into its own in the special case where the little learning is in philosophy (as it often is). A scientist who has the temerity to utter the t-word (‘true’) is likely to encounter a form of philosophical heckling which goes something like this:

There is no absolute truth. You are committing an act of personal faith when you claim that the scientific method, including mathematics and logic, is the privileged road to truth. Other cultures might believe that truth is to be found in a rabbit’s entrails, or the ravings of a prophet up a pole. It is only your personal faith in science that leads you to favor your brand of truth.

The straw man is being set up here by Dawkins so that it can be knocked down rather easily later. He cannot possibly expect us to believe that if he were to utter the words “It’s true that whales are mammals” to his friend Daniel Dennett, say, Dennett would respond with the sort of reply that Dawkins has put into the mouth of his imaginary philosopher above. Nor would any other respectable philosopher. The words “true” and “truth” are used in many contexts in English, most of them ordinary everyday usages of the “Is it true that it’s raining outside?” variety, where no normal person will respond with “What is truth?” or some other bizarreness. It is only on the highly technical and subtle issues which surround the philosophical notion of truth, such as attempts to pin down what entities the predicate “true” applies to (it doesn’t apply to words, but can apply to sentences, for example), or what it means for something to be “true” in a way general enough to cover all of its usages, that the philosopher might object. Just as “energy” and “work” are technical words in physics, “true” is a technical word in philosophy (as well as in mathematical logic). And just as no normal physicist is going to heckle or object to someone saying, “I did a lot of work today carrying furniture down to the street from my fifth-floor apartment” (in technical physics terms, no work was done by carrying things down), no philosopher will object to common uses of “true” or “truth”.
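
To make the physics side of that analogy concrete, here is the textbook definition being invoked, in a minimal sketch (the constant-height case is my own simplification of the furniture example):

\[ W = \int \vec{F} \cdot d\vec{s} \]

If you carry a box across a room at constant height, the force you exert on it is vertical while its displacement is horizontal, so \( \vec{F} \cdot d\vec{s} = 0 \) at every step and the total work you do on the box is zero, however tired your arms get. In the everyday sense you have “worked”; in the technical sense you have not—exactly the gap between ordinary and technical usage being claimed here for “true”.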

Similarly, I am not sure what Dawkins imagines a relativist to be, but according to his description above, if I say to the relativist, “Snow is green,” the relativist will be happy to accept my statement to be just as true as “Snow is white.” In fact, no normal English-speaking person will agree with me, or agree that my statement is true. To argue that philosophers are naive or wrong is one thing; to imagine that they are insane is quite another. What such a person would probably think and say is that perhaps I don’t know English well, and I am referring to something other than snow with the word “snow”, or that perhaps I don’t know what “green” really means, or even that perhaps I don’t know what “is” is.

There is an important principle in philosophy that any disagreement must take place against a background of much greater agreement. Before we can argue about whether “whales are mammals”, we must at least agree on what “whales” are and what “mammals” are. If I believe that mammals are animals with legs, that walk on land, and always must be so, then we are presumably not even arguing about the same thing.

This brings me to the gist of the matter. What Dawkins is really defending is a particular view of truth: what in philosophy is called the correspondence theory of truth. In contrast to his own fictional philosopher, he is saying that “there is an absolute truth”. In this view of truth, words refer to a reality external to the mind (for example, “hydrogen” refers to a substance consisting of atoms made up of one proton with one electron in orbit around it–ignoring the heavier isotopes of hydrogen), and sentences either capture that reality accurately (correspond to it), in which case they are true; or they don’t, in which case they are false. At first blush this may seem commonsensical and unproblematic, but it is not so. Let me attempt to give a flavor of the difficulties with a quick example: suppose that in the fifth century B.C., Socrates one day came home and said to his wife, “I saw a falling star on my way home,” and also suppose that I came home one night and said to my wife, “I saw a falling star on my way home.” Suppose Socrates and I mean the same thing by each of the words in the sentence. I think it is fairly clear that both our wives would instantly know what we were talking about, and perhaps visualize in their minds’ eye something similar to what we had just seen. But to Socrates, it was literally a falling star, while I know that stars don’t fall, that what I saw was most likely a meteor being incinerated in the atmosphere, and that I was deliberately referring to a meteor as a “falling star.” Now according to me, Socrates’s sentence to his wife was a falsehood, while I told the truth. Both women understood the same thing, and neither had any reason to suspect that her husband was telling a falsehood; yet from Dawkins’s point of view, what Socrates said was a falsehood (though he did not know it). And some future scientist may realize that meteors are actually something else, and on that day, suddenly, unbeknownst to me or my wife, my sentence will also become a falsehood. Is this not an odd notion of truth? There are many other problems with this sort of “absolute truth” view, but I must move on.

The other major theory of truth in philosophy is known as the coherence theory. This is a holistic view in which the meanings of words depend on the meanings of other words, and so on. There is a “web” or “network” of interdependent meanings, with words at the periphery being pinned down by ostension. Words (or sounds) are initially associated with certain salient aspects of the environment by repetition. If every time we are in the presence of any rabbit I say “rabbit”, you will eventually understand that the sound “rabbit” refers to a rabbit. I may, by pointing and so on, also define the word “ear”. Now if I say that a “tibbar” is a rabbit that does not have any ears, you will know what I mean. The meaning of “tibbar” is then given by the meanings of the words in terms of which you have understood it, not by some external reality of tibbars. Truth, in this view, is a predicate which applies to beliefs that cohere with other true beliefs. By this kind of holistic thinking, we get rid of the strangeness we encountered in the last paragraph, where the same sentence is judged true when I speak it but false when Socrates spoke it. Now, when Socrates says, “I saw a falling star on my way home,” all that is required to make this true is that it cohere with his other beliefs. This sort of view gets rid of many of the difficulties of a correspondence theory of truth, but sometimes at the cost of giving up on a certain notion of a fixed and absolute reality.

This may sound a bit odd at first, but it is a defensible theory, and many (possibly even most) respectable philosophers, including Daniel Dennett, now hold it. What I am trying to say is that it is not automatically wrong and silly to say that “there is no absolute truth.” There are reasonable ways of thinking in which truth is (in a very technical sense) not absolute, but dependent on our web of shared meanings and our other beliefs. There is no need for these philosophical ideas to do violence to any of our common everyday usages of truth, such as “It is true that Plasmodium causes malaria,” any more than our understanding through atomic physics that solid matter is mostly empty space should prevent us from saying, “It is true that the box is completely filled with iron.”

There is, of course, a lot more than what I have hinted at to all this, but it strikes me as unfair of Dawkins to imply that all philosophers who do not believe in “absolute truth” are being ludicrous relativists. Dawkins has rightly and often urged us to give up on the comforting notions of religious superstition. Why then must we cling to the scientifically comforting notion that there is an absolute truth out there waiting for us to discover it, rather than the idea that truth (in the limited sense I have described above) has to do with who we are and what we make it? Many philosophers know a great deal of science and mathematics. My own advisor at Columbia, David Albert, has a Ph.D. in physics and publishes in quantum theory as well as philosophy. Hilary Putnam is a mathematician as well as a philosopher and holds joint appointments at Harvard. Dennett and Paul and Patricia Churchland know as much neuroscience as some neuroscientists. I could go on and on. But few scientists take more than a superficial interest in philosophy anymore, and it is their loss. Dawkins is right when he quotes Pope in the first line of “What is True?” above. I can only very respectfully recommend to him and to you to drink deep at the philosophical spring.

[*NOTE: It has become clear to me after looking at some of the responses to this column (here and at other sites) that I almost surely misinterpreted Richard Dawkins’s meaning in the passage that I quoted from his article “What is True?” While I had originally thought that he is attacking philosophers in general, it is now fairly clear to me that he was there only attacking lunatics of the sort that would object to the use of the word “true” in a statement such as “It is true that snow is white.” This does nothing to change my assertion that what Dawkins is really doing in his article is defending a correspondence theory of truth, nor does it in any other significant way change the main thrust of my essay.]

Have a good week!

My other recent Monday Musings:
Reexamining Religion
Posthumously Arrested for Assaulting Myself
Be the New Kinsey
General Relativity, Very Plainly
Regarding Regret
Three Dreams, Three Athletes
Rocket Man
Francis Crick’s Beautiful Mistake
The Man With Qualities
Special Relativity Turns 100
Vladimir Nabokov, Lepidopterist
Stevinus, Galileo, and Thought Experiments
Cake Theory and Sri Lanka’s President

Dispatches: Divisions of Labor II

The strike of graduate students at NYU continues. The single demand of the Graduate Student Organizing Committee remains unmet: that NYU negotiate with the union, as it has done since 2001. (Once again, disclosure: I am a member of GSOC, and striking.) Currently the single most important labor struggle in the nation involving university teaching, the issue has attracted a great deal of coverage lately, and I believe that those interested in the current state of U.S. universities would do well to pay attention.

In a previous piece, I summarized the trajectory of disagreement that brought graduate students to the decision to strike.  Here, I’d like to provide an account of the situation at the arrival of a critical juncture.  Tomorrow marks NYU President John Sexton’s ‘deadline’ by which graduate assistants must return to work.  As a carrot, Sexton offers those who return non-union contracts guaranteeing the continuation of the gains and benefits that, ironically, were previously procured by the union (yearly salary increases, health coverage, etc.).

But the sticks are many.  By email, Sexton threatened students who choose not to scab tomorrow with the removal of both their ‘stipends’ (pay) and their spring ‘teaching eligibility’ (jobs)–the disaggregation of the two things being a rhetorical strategy meant to preserve the fiction that the stipends do not represent payment for teaching labor, despite the fact that they are disbursed to graduate teachers in the form of paychecks with taxes and social security withheld.  Of course, despite the fictive bureaucratese, firing workers for striking is illegal and generally considered a vile form of strike-breaking.  In practice it puts NYU’s graduate students in the position of almost all strikers – i.e. without pay. 

Still, it is unclear whether such a threat can be enforced, as many departments have enacted resolutions not to replace one another’s labor, leaving a very real question as to where hundreds of qualified scabs can be found. In addition, Sexton’s email holds that students who return must pledge not to resume the strike, or risk losing an additional year of teaching. Here again the response of the academic community has been one of outrage: compelling students to sign away their right to protest does not exactly smack of the vaunted ‘academic freedom’ that universities claim to defend.

In strategic terms, NYU’s actions over the month of the strike have further inflamed many graduate students, and this current ultimatum exacerbates the trend. No one likes to be intimidated. It has also provoked faculty outrage, not least because the threats erode the faculty’s traditional autonomy over teaching assignments and over the decision to censure particular students. Many chairs and directors of graduate studies are simply refusing to comply with the provost’s order to reassign spring teaching in accordance with the threats. And the labor movement in New York City has been engaged by the struggle, with Manhattan’s borough president saying that NYU’s actions have embarrassed the borough, and several City Council members proclaiming that NYU will receive no cooperation on land-use review until it recognizes the union. Although faculty and tertiary support are invaluable, the fate of the union will be determined by the size of the disruption caused by strikers, from which all other support will flow. Even were the strike to end without a contract, however, the fairly frightening glimpse into the workings of high-level university administration will have been instructive, and will have induced radicalism in many.

To step back, the philosophical question here is quite a simple one: are graduate students workers, and thus deserving of the right to unionize? Where political discourse is concerned, the Clinton-era National Labor Relations Board held that they were, while the George W. Bush-era board did not. No surprise there, and the decision is not binding. But let me offer a counterexample to the view that graduate students are not workers: the fact is, they are already classed as workers at many universities, including all the SUNY schools as well as Rutgers. The only difference is that these universities are public. Is there, then, any significance to the distinction between public- and private-university graduate students?

I don’t believe that a distinction germane to this issue can be made.  Certainly the argument that unions erode collegiality and interfere with internal academic affairs can be dispelled by a glance at Rutgers, where graduate students have been unionized since 1972 without incident.  It is also very difficult to deny that working conditions at NYU have improved since unionization.  In 2000, students in the English department were paid 12,000 dollars for teaching four classes or discussion sections, with no health benefits.  Today, compensation for the same workload is 19,000 dollars plus health coverage.  Better working conditions make for better teaching; thus the undergraduates are better served by the union as well.  Either we should have a union, or Rutgers shouldn’t.  You make the call.  If you make the one I think you will, come picket with us this morning.

Dispatches:

Divisions of Labor I
Local Catch
Where I’m Coming From
Optimism of the Will
Vince Vaughan…Eve Sedgwick
The Other Sweet Science
Rain in November
Disaster!
On Ethnic Food and People of Color
Aesthetics of Impermanence

Rx: SPICING CANCER TREATMENT

The population of India is now over a billion, with an estimated 1.5 million cases of cancer diagnosed per year. The population of the United States is 295 million, and yet 1.5 million cancers will be diagnosed this year. The accompanying table shows that the incidence of breast cancer in the US is 660/million, while in India it is 79/million. Similarly, in the US, prostate cancer accounts for 690 cases/million, while in India it is 20/million. What is more surprising is that in a country where large numbers of people smoke, and where pollution control is not as good as in the developed countries, the incidence of lung cancer in India is nevertheless 30/million, compared to 660/million in the US. Only head-and-neck and endometrial cancer rates are comparable between the two countries, the former probably related to the chewing of pan and betel nut in India.
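
To put the two headline figures on a common per-capita footing, here is a back-of-the-envelope sketch (taking “over a billion” as roughly 1.05 billion, an assumption on my part):

\[ \frac{1.5 \times 10^{6}}{1.05 \times 10^{9}} \times 10^{6} \approx 1{,}430 \ \text{cases/million (India)}, \qquad \frac{1.5 \times 10^{6}}{2.95 \times 10^{8}} \times 10^{6} \approx 5{,}080 \ \text{cases/million (US)} \]

Crudely, then, the overall incidence rate in the US is about three and a half times India’s, before any adjustment for age structure or under-reporting, both of which are taken up below.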

COMPARISON OF CANCER INCIDENCE IN USA AND INDIA:

[Table: cancer incidence rates in the USA and India, by cancer site]

The cause of this discrepancy has been debated and is felt to be multi-factorial, including the genetic predisposition of the subjects, their lifestyles, something unique about the geography or the environment, or any combination of the above. The weakest factor in this list of possibilities is that of genetic predisposition. While some diseases are clearly more common in or restricted to certain races (for example, Jews of Eastern European, or Ashkenazi, descent carry the Tay-Sachs gene at ten times the rate of other Americans), cancer incidence is associated with individual or familial predisposition rather than racial predisposition. It has also been suggested that there may be serious under-reporting of cancer cases from third-world countries, making the statistics unreliable. Frequently, patients from the remote and rural parts of the country either do not seek treatment at all, or succumb to the disease before a diagnosis is made, with many more preferring homeopathic and Ayurvedic remedies. However, differences are also apparent between Asians and Americans living in the US, as shown in the example of cancer statistics for males in Massachusetts:

[Tables: cancer incidence among males in Massachusetts, Asians compared with other Americans]

Similarly, the incidence of cancer differs among the female population:

[Tables: cancer incidence among females in Massachusetts, Asians compared with other Americans]

It is conceivable that cancers of older age groups, such as prostate cancer or certain hematologic malignancies, may not be as common in India because the percentage of aged people in the population is comparatively lower, but this does not explain the well-documented difference in the incidence of childhood leukemias.

Two thirds of all cancers are related to diet. Associations between the two are difficult, if not impossible, to prove because of the formidable number of variables involved. Problems with the American diet are being increasingly appreciated because of the epidemic of obesity. Meats and poultry obtained from animals that have been fattened up on hormones, along with chemically preserved foods, may be factors that contribute to the early onset of puberty in girls and to the increasing incidence of chronic diseases like diabetes and cancer. However, another possibility is that Americans not only consume, in large amounts, what is damaging; they also do not eat what could potentially neutralize and protect them against the carcinogenic effects of the former. For Indians, that protective effect may be provided by a diet rich in spices. Garlic, onion, soy, turmeric, ginger, tomatoes, green tea and chillies, staples of Indian cooking, have been shown to be associated with a lower risk of a variety of cancers, from colon and other GI-tract cancers to breast cancer, leukemias and lymphomas.

The benefits of spices have been known for millennia. As Alexander the Great was returning home after conquering the known world, the last such place being India, he fell ill and unexpectedly died in Babylon. Researchers at the University of Maryland have since concluded that he died of typhoid. Upon his death, a fight broke out between the Macedonians, who wanted to take their native son home for burial, and Ptolemy, Alexander’s powerful general, who governed the conquered Egyptian territory and wanted to bury him in Egypt. It took almost a year to build a chariot suitable for transporting the body out of Babylon, and during this time Alexander’s body had to be preserved. Interestingly, even though the secrets of mummification were by then known to the Greeks because of the conquest of Egypt, the body was actually preserved in spices, white pepper and honey.

Sir Thomas More was beheaded by order of King Henry VIII, and his head was boiled before being impaled on a spike and displayed on London Bridge, where it stayed for a month, taken down only as more heads began to arrive, and eventually returned to his daughter. Margaret kept the head with the greatest reverence as long as she lived, carefully preserving it by means of spices, and to this day it remains in the custody of one of his relatives. The vault containing the head has rarely been opened since the 1500s, most recently in 1837, when it was found still in reasonably good shape.

Spices have been used for ages as food preservatives. Mothers knew millennia ago that meat spoils quickly in hot climates, and that their children died if they ate leftover food. Meat, a rich source of protein for their children, was a precious commodity, especially in hunter-gatherer days, and needed to be preserved. Mothers learnt through experience that adding spices could accomplish this goal. Geographically speaking, the number of spices used in a region’s cooking has been shown to be directly proportional to how hot its climate is. In Western countries, by contrast, food is either chemically preserved or frozen. Spices kill germs, and are therefore highly effective as preservatives.

The precise mechanism by which spices prevent the development of cancer is not well understood. Spices are some of the best natural anti-oxidants, and may act by protecting cells from DNA damage. There is a documented association between germs and cancer; estimates are that ~15% of cancers globally are caused by micro-organisms. It is possible that many cancers are initiated by pathogens, and that spices prevent this from happening by killing off the germs. More importantly, natural substances like onion, garlic, ginger, turmeric, red chilli, tomatoes, and black pepper have now been shown scientifically to interfere with the very intracellular signaling that accounts for the excessive proliferation and loss of maturation in cancer cells. The bio-chemical properties of these substances are being widely investigated, with over 1000 papers on curcumin and ginger published in highly respected medical journals in the last few years alone. In summary, then, spices may act to prevent the various stages of cancer initiation and development through a combination of their anti-oxidant, anti-pathogen and anti-proliferative properties.

Plants of the ginger family (Zingiber officinale Roscoe, Zingiberaceae), among the most heavily consumed dietary substances in the world, have been shown to inhibit tumor promotion in mouse skin. The substance called [6]-gingerol is the main active compound in ginger root and the one that gives ginger its distinctive flavor. A review of recently published studies indicates that, among a host of other activities, gingerol induces apoptosis (cell death) in leukemia cells, can prevent the development of colon cancer cells, protects against radiation-induced lethality, and acts as a blood thinner by inhibiting platelet activation (an aspirin-like effect).

Curcuma longa, or turmeric, responsible for the yellow color of curry powder, is a herb belonging to the ginger family, and curcumin is its most active component. Turmeric has been widely used in India for centuries as a panacea for a variety of ailments. In summary, curcumin has been found to interfere with key cellular signaling pathways to arrest the unchecked proliferation of cancer cells, induce apoptosis, sensitize cancer cells to radiation therapy, and stop the formation of new blood vessels, a mechanism by which tumors are known to grow and spread. These are the very effects desired to achieve growth arrest and eventual regression in a malignant process.

At least 9 clinical studies of curcumin in humans have now been reported, in conditions ranging from cancer to rheumatism, uveitis, inflammatory diseases, leukoplakia and metaplasia of the stomach, as well as for cholesterol lowering. All studies show that curcumin is extremely well tolerated in doses of 4 to 8 grams/day, although up to 12 grams/day have also been administered. Clinical responses of varying degrees have been reported in almost all of these trials. Similarly, gingerol has been used for its biologic and chemopreventive effects for centuries, with more controlled clinical trials appearing in recent years.

While spices may prevent cancer initiation and expansion, could they also be of therapeutic benefit in already established tumors, especially if given in very high doses? Intuitively, the earlier such treatment is instituted in the course of the disease, the higher the probability of benefit. Two obvious candidates are the pre-malignant conditions marked by abnormal morphology called dysplasias, and established malignancies such as low-grade lymphomas and chronic leukemias, where the course is so slow that a watch-and-wait policy is usually practiced. Over the ensuing years, these diseases change character, becoming progressively more lethal, at which point intervention is undertaken with aggressive and toxic approaches like radiation and chemotherapy. A good place to start may be the use of these natural substances in such conditions, especially in the earliest stages of disease evolution. The benefit from natural substances is likely to take time, a luxury that cannot be afforded in the case of rapidly growing malignant diseases; the sooner the intervention occurs, the better.

We were curing malaria long before we knew what caused it. It was the empirical observation that victims of malaria who chewed on the bark of the Cinchona tree improved dramatically that led to the extraction and isolation of quinine. If we wait for a complete understanding of every abnormal signal and molecule in a cancer cell, effective therapies may be a long way off. On the other hand, pursuing an observation such as the role of diet in preventing cancer can help develop novel therapies as well as define preventive measures. As the campaign against smoking has shown, it will take a long time to bring about the social change required for major lifestyle adjustments such as alterations in diet. While such changes are essential in the long run, it may be advisable to combine the best of what East and West offer: using natural substances to treat the earlier stages of cancers, while using the latest DNA microarray technologies and the results of the Human Genome Project to understand the precise mechanism of action of these spices.

Studying age-old Eastern remedies, or “complementary” medicine, runs the risk of being branded as voodoo. Upon hearing of my current interest in treating cancers with Masala after so many years in cancer research using the most sophisticated scientific tools, a beloved family member in Pakistan remarked in wonder, “But I thought you went to America to become a rich doctor, not a witch doctor!”

Previous Rx Column:
The War on Cancer

Monday Musings: Exporting Institutions, Comparison, and the Iraq debate

Not too long ago I was struck by how most debates about Iraq that I come across are about exactly what left-liberal hawks (such as Paul Berman, Norman Geras, Christopher Hitchens, Michael Ignatieff, and Bernard Kouchner) say the war was about: the democratization of Iraq. By this, I don’t mean that critics of the war think that it was fought to democratize the country; in fact, many critics of the war, including myself, are skeptical of this motive, given that the war has been fought, and its aftermath directed, by a cohort remarkably close to many of the thugs of the wars in Central America and Africa in the 1970s and 1980s. Rather, the question of whether the war, and, more relevantly, the presence of American and coalition troops in Iraq, is worth it pivots on what we think the chances are of Iraq becoming a democracy. If you see democracy in Iraq as a likely outcome, or even just as more likely with American troops than without, then chances are you oppose the withdrawal of American troops.

What I’ve been struck by in these debates in the last few years is the way we argue about how possible democracy in Iraq happens to be; that is, how we assess the chances. These debates revolve around the causal power of institutions. By “institution,” I mean the standard social science understanding of the term: the formal and informal rules and operating practices which structure the interactions between people in a society, an economy or a polity. The institutions that interest debaters here are those that structure and give rise to democracy. The question they ask is primarily whether institutions that work in current democracies can be exported to Iraq. I don’t want to answer that question, but I do want to look at how people go about trying to answer it.

The issue of whether democracy can be exported is itself now at the core of the debate. For example, Barry Posen, a smart observer of international politics, offers this argument in support of a pull-out:

Iraq is a society divided into three groups with strong identities, and ethnically and religiously fissured societies are not easy to democratize. Minorities fear the tyranny of the majority, and majorities have a hard time avoiding the temptation to tyrannize. To the extent that the Bush administration’s ideal political outcome in Iraq can be discerned, it is a stable, pluralist, democratic, unitary state with strong constitutional protections for minority rights that the minorities are willing to rely upon. This goal is implausible, though, because the U.S. government cannot erase Iraqi history, and it cannot undo the political power of sheer numbers. In Iraqi history, to be disarmed is to be violated. In a democracy riven by strong group loyalties, to be outnumbered is to be vulnerable. Sunnis and Kurds won’t live without their own armies. Shiites won’t share political and military power with Sunnis who have been cunning and ruthless enough to rule as an armed minority in the past.

The remark is not quite made in passing, but it does rest on a deep assumption about the relationship between institutions and the conditions in which they operate. Specifically, it suggests that institutions are hard to export largely because they matter less for outcomes than the conditions that would bring those institutions about in the first place. What really matters is the interests and values of the people on the ground, and how money and guns are distributed among them. “The best institutions are written on the hearts of men,” and all.

Few people believe that a set of institutions can be put down anywhere and have the same effects in all the places we find it. And at the same time, most of us believe that institutions make us act in ways we would not have acted in their absence, which is why we try to reform them in the face of stiff opposition. In the debates we hear and have about Iraq—in the media, at lectures, in bars and at parties—we’d be better served by a better understanding not only of institutions themselves but of how we implicitly evaluate whether an institution can succeed.

Here, we’re speaking of a very peculiar set of institutions: the ones that make up a democracy. They are peculiar because under them, the losers of an election accept the outcome and go home; they do not take to the hills with guerrillas, or to the barracks to gather troops and march on Capitol Square. There is certainly no shortage of instances in which the losers of an election stormed the palais, or dissolved the constituent assembly. There is also no shortage of instances in which the winners decided they didn’t want to take the chance of losing the next time, and ended the system. As a matter of simple fact,

‘One cannot stop a coup d’état by an article in the constitution’, any article in the constitution, Guillermo O’Donnell once remarked . . .

O’Donnell’s comment was mentioned in a talk that Adam Przeworski, one of the smarter comparativists in political science, gave a year and a half ago, entitled “Institutions Matter?”. The talk was about whether institutions really matter, how we can know, and, ultimately, whether a science of comparative politics is possible. (It is certainly worth a read, and it is an important question for the general public, since so many of our political debates draw lessons from elsewhere: the well-functioning of the French health system, the alleged malfunctioning of British national health care.) The importance of the issue isn’t purely methodological.

[T]he issue has practical, policy, consequences. Particularly now, when the United States government is engaged in wholesale institutional engineering in far away lands, skepticism and prudence are in order. The policy question is about whether one can stick any institutions into some particular conditions and expect that they would function in the same way as they have functioned elsewhere. Note that when the US occupying forces departed from West Germany and Japan they left behind them institutions that took root; that were gradually adjusted to local conditions; and continued to organize the political lives of these countries. When the US occupying forces left Haiti in 1934, they also left as their legacy a democratic constitution, authored by the assistant secretary of the navy, who was none other than Franklin D. Roosevelt. Yet this constitution did not prevent President Vincent from becoming an absolute despot one year later. Why, then, did similar institutions succeed under some but not under other conditions?

We don’t know, at least not yet. It is for this reason that debates that rely on what worked in Germany, or what happened in Bosnia, don’t at all convince the side that the examples are aimed at.

We are generally bad at thinking about how institutions win assent, and they must. Can support for democracy be measured by opinion polls in which people are asked whether they support a democracy? Or does its stability depend on other factors—support for liberalism, tolerance, the rights of women? (In this instance, the latter are conditions in which the institutions of democracy operate.) Simply put, institutions reflect to a large degree the relations of power found in a society, and they must do so to be self-sustaining.

These conclusions may be too pessimistic, though I agree with them in the context of the debate on Iraq (about which I sincerely hope I am very wrong and the optimists very right, given what’s at stake). Large-scale social engineering projects are generally heroic, with the largest ones associated with the largest disasters. But a richer understanding of institutions over the past few decades has helped us see the extent to which some of what we had written off as conditions can be understood as institutions, and are therefore changeable.

I was thinking of Przeworski’s talk while reading our friend Alex Cooley’s new book Logics of Hierarchy: The Organization of Empires, States and Military Occupations. (I recommend it to those interested in these questions of how reform unfolds.) Alex’s claim is that looking at how political hierarchies are organized, specifically whether they are territorially or functionally structured, will tell us much about the way reform projects unfold. (His look at the occupation of Iraq focuses on the organization of the United States reconstruction effort: how the reliance on various private and opportunistic companies for the provision of services to Iraq placed vital components of the project outside any meaningful system of accountability.)

I was thinking of Alex’s book because its lessons, while drawn from comparison, are not about outcomes per se, but rather about what to look for. And if there is anything to a “science” of comparative politics, it is in finding what to look for in these large social processes and changes, where a society’s own history offers less information than we need. A useful comparative politics helps to structure ways of identifying what it was about Germany that allowed defeat and occupation to lead to democracy, and what it was about Haiti that did not. Comparativists themselves have known this for a long time. A popular version of this sensibility could perhaps help the Iraq debate.

Monday Musing: Sugimoto


There has long been a battle between time and history. Simply put, time likes to obliterate and history likes to stick around. In the long run, time always wins. But in the short run, history has been known to score a few points, though often by being so brutal and absurd that some have wondered whether time’s efficient destruction wouldn’t have been the better option. Such is life. Caught between our own human dumbness and the empty monotony of a mute cosmos, we tinker away, bags of mostly water.

In this great, if pointless, battle, the art of photography has always been an ambivalent weapon. Does photography work in history’s camp, laboring away to freeze time, capture moments, and preserve something in memory, which is history’s greatest ally? Or is photography the killer of meaning, a cudgel in the arsenal of time, a memento mori that reminds us of the ruin eating away at the core of history?

This ambivalence goes back to the dawn of the medium. Some of the earliest photographic portraits feel like tiny triumphs of history. Not only have they preserved a moment of personhood, showing us individual human subjects; they have also managed to capture a small portion of the context and environment in which that person achieved whatever personhood they did. They preserve a little chunk of world, and world is meaning. But then there is the work of photographers like Atget and Marville. In these photographs, the world is just barely a world; the empty streets of Paris have been reduced to landscapes under the order of nature. They can’t last. They’re already ruins. They already betray the signs of decay, which is the fate of all things.

Hiroshi Sugimoto has long thought of himself as a photographer. These days he has branched out into making all manner of things, and an impressive array of them can currently be viewed at the Japan Society. There are dioramas and ‘fossils’ and sculptures and copies of masks, textiles, household objects, religious icons and various other testimonials to human civilization. They are all rendered with the smoothness, precision, and calm detachment that has characterized Sugimoto’s photographic work. But they are photographic in a much deeper way than that. He is still working on the knife’s edge where history and time come together.

The show is called, aptly, The History of History. And it is difficult to figure out which side Sugimoto is on. His concern for history sometimes feels like a tribute to its struggle against time. But, then again, the mood of Sugimoto’s inquiry suggests that he is an observer from outside, peering at history from the vantage point of eternity. How else could he dare contend that he is producing a history of history? Such is the stuff of gods or extraterrestrials or brains in a vat. Indeed, a person could be forgiven for thinking that Sugimoto is one of Epicurus’ gods, surveying the course of history from the intermundia, the spaces between worlds, with a sublime indifference.

In an interview with art critic Martin Herbert, Sugimoto said:

The first portfolio of seascapes I published was entitled ‘Time Exposed’ because time is revealed in the sea. When I began thinking about the seascapes I was thinking, what would be the most unchanged scene on the surface of the earth? Ever since the first men and cultures appeared, they have been facing seas and scenes of nature. The landscape has changed over thousands, millions of years, man has cultivated the ground, built cultures and cities, skyscrapers. The seascapes, I thought, must be the least changed scene, the oldest vision that we can share with ancient peoples. The sea may be polluted, but it looks approximately the same. So that’s a very heavy time concept. People have a lot of strange ideas about my seascapes – they think these photographs were done using very long exposures, but they are in fact very fast because I wanted to stop the motion of the waves, which are constantly moving.

And it is these same seascapes that dominate the installations in the History of History exhibit. By dominate I mean that they are the subtext for everything else. They loom. Never more so than in the tiny sculpture Time’s Arrow. A reliquary that could fit in the palm of your hand, Time’s Arrow looks like the kind of thing that might frame an old snapshot of your grandmother. Instead it holds a seascape. That damn seascape. Nothing has been more ominous since the monolith of 2001: A Space Odyssey. But the monolith and the seascape are completely different in at least one important way. The monolith seemed to portend some unknown design, a determinate, if hidden, purpose at the horizon of the cosmos. The seascape is purposeless, plain and simple. Time’s Arrow indeed. What a cruel joke, Mr. Sugimoto. There’s no arrow in that arrow, no direction. All that time promises is simply more of itself. More time. More horizons looking infinitely forward and infinitely backward. You may, Sugimoto seems to suggest, choose to fill up that empty expanse with history (probably there’s nothing better to do), but time will always have an answer. The seascape cannot be erased.

It reminds me of the old Soviet joke:

On the occasion of an anniversary of the October Revolution, Furmanov gives a political lecture to the rank and file: “…And now we are on our glorious way to the shining horizons of Communism!” / “How did it go?”, Chapayev asks Petka afterwards. “Exalting!… But unclear. What the hell is a horizon?” / “See Petka, it is a line you may see far away in the steppe when the weather is good. And it’s a tricky one — no matter how long you ride towards it, you’ll never reach it, you’ll only wear down your horse.”

In Sugimoto’s vision, the struggle of history against time is a ridiculous one. The pathetic labors of civilization are a bemusing spectacle. Behind it all, always, the horizon, the empty expanse of the sea, the changeless obliterations of time.

And yet … there is something tender about Sugimoto’s reproductions of history and the spectacle of culture. He’s doing work on history, almost fetishizing its objects. At the same time, the limitless horizon of the seascape. The two don’t resolve themselves in Sugimoto’s work. They’re just what is.

Monday Musing: Reexamining Religion

Pervez Hoodbhoy is a well-known physicist who teaches at the Quaid-e-Azam University in Islamabad, Pakistan. He is also known for his frequent and intelligent interventions in politics. In an article entitled Miracles, Wars, and Politics he writes:

On the morning of the first Gulf War (1991), having just heard the news of the US attack on Baghdad, I walked into my office in the physics department in a state of numbness and depression. Mass death and devastation would surely follow. I was dismayed, but not surprised, to discover my PhD student, a militant activist of the Jamaat-i-Islami’s student wing in Islamabad, in a state of euphoria. Islam’s victory, he said, is inevitable because God is on our side and the Americans cannot survive without alcohol and women. He reasoned that neither would be available in Iraq, and happily concluded that the Americans were doomed. Then he reverentially closed his eyes and thrice repeated “Inshallah” (if Allah so wills).

The utter annihilation of Saddam Hussein’s army by the Americans, which soon followed, did little, of course, to attenuate this student’s convictions. (Also, it is mildly interesting that Muslim conceptions of heaven focus so much on precisely the easy availability of alcohol and women.) Constantly confronted by such attitudes, atheists such as myself are often driven to hair-pulling exasperation by the seeming irrationality of religious belief, and specifically by its immunity to refutation by experience, logic, argument, or, it seems, anything else. Professor Hoodbhoy goes on to note that:

In Pakistan today – where the bulk of the population has been through the Islamized education initiated by General Zia-ul-Haq in the 1980’s – supernatural intervention is widely held responsible for natural calamities and diseases, car accidents and plane crashes, acquiring or losing personal wealth, success or failure in examinations, or determining matters of love and matrimony. In Pakistan no aircraft – whether of Pakistan International Airlines or a private carrier registered in Pakistan – can take off until appropriate prayers are recited. Wars certainly cannot be won without Allah’s help, but He has also been given the task of winning cricket matches for Pakistan.

And this state of affairs by no means obtains only in Islamic societies. It is more or less universal. Consider the following about the born-again-Christian-led United States: all polls on such subjects show that a great majority of Americans believe in miracles, angels, and an afterlife where one will be reunited with one’s relatives and friends, and according to one recent poll, 96 percent believe in God. It is only in the rarefied air of elite academic institutions such as the National Academy of Sciences that one finds a majority of atheists and agnostics. And contrary to popular misconception, Europe is not much different. The reaction to this ubiquity of faith-based superstition on the part of intellectuals is best epitomized by Richard Dawkins’s frequent and witty expressions of indignant frustration with, and attacks on, religion. (He is not always choleric on this issue: one of the more tenderly moving things I have read is Dawkins’s letter to his 10-year-old daughter Juliet, published in A Devil’s Chaplain as “Good and Bad Reasons for Believing.” If I ever have children, it will be required reading for them.) And I stand beside him in calling attention not only to the silliness of religious superstition, but also to the misguidedly anodyne view, repeatedly expressed by Stephen Jay Gould and others, that religion and science do not clash and can peacefully coexist. They can do no such thing, and one has only to look at the recent court battles over Intelligent Design in Kansas, Pennsylvania, and Delaware to see that (battles similar to the creationist ones Gould was bravely at the forefront of fighting while alive). But until recently, few scientists have put much effort into explaining the ubiquity of religious belief. If it is so irrational, then why is religious conviction so widespread?

Today, I would like to report on the fascinating work of two young scientists on this question: Pascal Boyer, an anthropologist, and Paul Bloom, a psychologist. Traditional explanations of religious belief have tended to fall roughly into two categories. First, there is what might be called the “opiate of the masses” view. This claims that religion is a way of assuaging the pain and suffering of everyday life. Faced with injustice and an indifferent physical universe, people have invented scenarios which help them imagine rewards and punishments in an afterlife, and other ways of injecting meaning into a seemingly purposeless existence. Second, there is the category of explanation which relies on the social benefits that accrue to a society which shares religious beliefs. In addition to providing group solidarity through ritual, these might include the acceptance of uniform moral codes, for instance. On this theory, religious beliefs are seen as memes that are particularly successful because they provide a survival advantage to the groups that hold them (maybe even simply by making people happier). As Pascal Boyer points out in his excellent book Religion Explained, in both cases it is assumed that reason is somehow corrupted or even suspended by the attractiveness (and benefits) of religious belief.

There are problems with these views, and I will, again, mention just two. First, it is clear that people will not believe just anything that provides meaning or promotes social cohesion. There is a very limited range of beliefs that people are willing to accept, even in religion, and these explanations do not address this selectivity. For example, it would be very hard to convince people of a God who ceased to exist on Wednesdays [Boyer’s example]. The second problem, which has also been pointed out by Steven Pinker, is that both these types of explanation rely on showing that some advantage comes from believing in religion, and this puts the cart before the horse. We do not generally believe a thing because having the belief might help us; we believe things that we think are true. If you are hungry, it may help you to believe that you just ate a huge meal, but you will not. As Bloom says in an article in this month’s Atlantic, “Heaven is a reassuring notion only insofar as people believe such a place exists; it is this belief that an adequate theory of religion has to explain in the first place.”

The new approach to explaining religion that Boyer and Bloom (and Scott Atran, Justin Barrett, Deborah Kelemen, and others) represent does not see religious belief as a corruption of rationality, but rather as an over-extension of some of the very mental mechanisms that underlie and make rationality possible. In other words, rather than religion having emerged to serve a social or other purpose, on this view it is an evolutionary accident. In particular, Bloom uses developments in child psychology to shed light on religious belief, and it is these that I would like to focus on now. I cannot here go into the details of the experiments which demonstrate this, but it turns out that one of the things which seems hardwired in young infants (present before they can even speak, not learned from experience) is the distinction between inanimate and animate objects. Infants are clearly able to distinguish physical things from objects which demonstrate intentionality and have psychological characteristics: in other words, things with minds. In Paul Bloom’s words, children are “natural-born dualists” (in the Cartesian sense). It is quite clear that the mental mechanisms that babies use to understand and predict how physical objects will behave are very distinct from the mechanisms they use to understand and predict how psychological agents will behave. This stark separation of the world into minds and non-minds is what, according to Bloom, makes it eventually possible for us to conceive of minds (or souls) without bodies. This explains beliefs in gods, spirits, an afterlife (we continue without bodies), and so on. The other thing babies are very good at is ascribing intentionality. They are very good at reading desires and intentions in animate objects, and this is necessary for them to function socially. Indeed, they are so sensitive to this that they sometimes overshoot and ascribe goals and desires even to inanimate objects. And it is this tendency which eventually makes us animists and creationists.

Notice that while most people have previously proposed that we are dualists because we want to believe in an afterlife, this new approach turns that formulation around: we believe in an afterlife because we are born dualists. And we are born dualists in order to make sense of a world which contains two very different kinds of entities (in terms of trying to predict what they will do): physical objects and things with minds. Bloom describes an interesting experiment in which children are told a story (with pictures) in which an alligator eats a mouse. The mouse has clearly died, and the children understand this. Bloom says:

The experimenters [then] asked the children a set of questions about the mouse’s biological functioning–such as “Now that the mouse is no longer alive, will he ever need to go to the bathroom? Do his ears still work? Does his brain still work?”–and about the mouse’s mental functioning, such as “Now that the mouse is no longer alive, is he still hungry? Is he thinking about the alligator? Does he still want to go home?”

As predicted, when asked about biological properties, the children appreciated the effects of death: no need for bathroom breaks; the ears don’t work, and neither does the brain. The mouse’s body is gone. But when asked about the psychological properties, more than half the children said that these would continue: the dead mouse can feel hunger, think thoughts, and have desires. The soul survives. And children believe this more than adults do, suggesting that although we have to learn which specific afterlife people in our culture believe in (heaven, reincarnation, a spirit world, and so on), the notion that life after death is possible is not learned at all. It is a by-product of how we naturally think about the world.

While it is this natural dualism that makes us prone to belief in an afterlife, spirits, gods, and other supernatural entities, it is what Pascal Boyer has called a hypertrophied sense of social cognition which predisposes us to see evidence of purpose and design even when it does not exist. Bloom describes it this way:

…nascent creationist views are found in young children. Four-year olds insist that everything has a purpose, including lions (“to go in the zoo”) and clouds (“for raining”). When asked to explain why a bunch of rocks are pointy, adults prefer a physical explanation, while children use a functional one, such as “so that animals can scratch on them when they get itchy.” And when asked about the origins of animals and people, children prefer explanations that involve an intentional creator, even if the adults raising them do not. Creationism–and belief in God–is bred in the bone.

As another example of the attribution of causality to intentional agents where there are none, consider the widespread belief in witches. In an article entitled “Why Is Religion Natural?,” Pascal Boyer writes:

Witchcraft is important because it seems to provide an “explanation” for all sorts of events: many cases of illness or other misfortune are spontaneously interpreted as evidence for the witches’ actions. Witchcraft beliefs are only one manifestation of a phenomenon that is found in many human groups, the interpretation of misfortune as a consequence of envy. For another such situation, consider the widespread beliefs in an “evil eye,” a spell cast by envious people against whoever enjoys some good fortune or natural advantage. Witchcraft and evil eye notions do not really belong to the domain of religion, but they show that, religious agents or not, there is a tendency to focus on the possible reasons for some agents to cause misfortune, rather than on the processes whereby they could do it.

For these occurrences that largely escape control, people focus on the supernatural agents’ feelings and intentions. The ancestors were angry, the gods demanded a sacrifice, or the god is just cruel and playful. But there is more to that. The way these reasons are expressed is, in a great majority of cases, supported by our social exchange intuitions. People focus on an agent’s reasons for causing them harm, but note that these “reasons” always have to do with people’s interaction with the agents in question. People refused to follow God’s orders; they polluted a house against the ancestors’ prescriptions; they had more wealth or good fortune than their God-decreed fate allocated them; and so on. All this supports what anthropologists have been saying for a long time on the basis of evidence gathered in the most various cultural environments: Misfortune is generally interpreted in social terms. But this familiar conclusion implies that the evolved cognitive resources people bring to the understanding of interaction should be crucial to their construal of misfortune.

To state it one more time: the correct explanation for the ubiquity and stability of religious beliefs lies not in postulating rash abandonments of rationality for the gain of some social or mental benefit; rather, such superstitious beliefs are firmly rooted in our ordinary mechanisms of cognitive functioning. In addition, these beliefs are parasitic upon mental systems which have evolved for non-religious functions, but which have similarities to religious concerns: for example, fear of invisible contaminants (religious rituals of washing), or moral intuitions and norms (religious commandments).

Obviously, to see this sort of naturalistic account of religious and other supernatural beliefs as an endorsement or defense of religion would be to commit a naturalistic fallacy of the worst sort. What Boyer, Bloom, et al. have done is to point out a weakness in our cognitive apparatus, a by-product of the way our mental systems have evolved. This is analogous to the well-known systematic weaknesses that people show in thinking about probabilistic phenomena (shamelessly exploited in Las Vegas and Atlantic City, not to mention the highly deplorable state-run lotteries). Having discovered an accidental source of incorrect beliefs within ourselves, we must struggle against it, and be ever-vigilant when thinking about these sorts of issues.

Have a good week!

My other recent Monday Musings:
Posthumously Arrested for Assaulting Myself
Be the New Kinsey
General Relativity, Very Plainly
Regarding Regret
Three Dreams, Three Athletes
Rocket Man
Francis Crick’s Beautiful Mistake
The Man With Qualities
Special Relativity Turns 100
Vladimir Nabokov, Lepidopterist
Stevinus, Galileo, and Thought Experiments
Cake Theory and Sri Lanka’s President

Selected Minor Works: Taxonomy as a Guide to Morals

Justin E. H. Smith

There is a long tradition in philosophy, going back at least to Epicurus, of allowing examples drawn from the domain of sexuality to serve in the analysis of eating, and vice versa. Sometimes this amounts to sloppiness, but often one can gain insight. Consider the photographs of peaches or cherries that make their way onto the covers of books in the erotica genre. These might tromper l’oeil, for an instant, but when we see what the photo is actually of, we are inclined to think: how clever, that peach looks like a naked woman from behind. Yet publishers of erotic literature dare not attempt the same trick with a suitably ambiguous photograph of a goat’s haunches. An erotic experience caused by a cherry is a fundamentally different sort of experience than one caused by a goat. This difference might, on its own, lead one to think that, similarly, a culinary experience with a cherry and one with a goat are two very different things as well. It is also interesting to note that in poetry allusions to fruit work well as erotic metaphors, while mention of ‘meat’ in the same context would be not erotic, but pornographic.

But it is zoology, and not phenomenology, that informs the dietary rules of contemporary ethical eaters. Most vegetarians today seek to index their dietary rules to Linnaean taxonomy. A moment’s reflection will show this to be an odd project. To eat corn and mushrooms, but not beef and mussels, only because, as we inhabitants of the post-Linnaean world know, cows and marine invertebrates are grouped together in the kingdom “animalia,” whereas plantae and fungi are different kingdoms altogether, is, one might think, to put a bit too much faith in the ability of scientific taxonomy to reflect reality, and, what’s more, to serve as a guide to practice. When it comes to dietary decisions of this sort, surely folk taxonomy is a more reliable guide. The Karam of the New Guinea Highlands, to cite one of many examples available from the anthropological literature of kingdom-mixing in folk taxonomy, class certain mushrooms with animals, in virtue of the texture of their ‘meat’. And what folk taxonomy tells us is that cows are more like humans than they are like scallops, and scallops are more like corn-on-the-cob than they are like cows; the intuitive appropriateness of the phrase ‘frutti di mare’ has, after all, survived three centuries of taxonomic precisification.

The relevant likeness, again, has nothing to do with arguments for or against moral status based on neurophysiological evidence. Rather, it has to do with the instruments and methods employed to kill the creature, the amount of blood spilled, and the sense of the relative specialness of the meal that results from this killing. Though the taxonomies are very different, in all cultures there appears, in addition to the class of entities that cannot be killed and eaten under any circumstances (pets and people, at least the friendly ones, as we will see below, and usually negatively social creatures such as rats), to be a certain class of entities cordoned off from the rest, distinguished by the fact that its members cannot be casually killed and eaten. They can be killed and eaten, but this requires some kind of communal to-do by which their sociocosmic significance, or what we would call their ‘moral status’, is acknowledged.

We are led astray not just in trying to index ‘moral wrongness’ to the innate cognitive and sensitive capacities of the beings in question, but even in thinking that the question of what we are and are not to eat has much to do with ‘moral wrongness’ in the sense in which philosophers understand it. Rather, rules about what can be eaten, and under what circumstances (never social animals like pets or rats, sometimes large game, fruits and nuts more or less anytime), seem to involve a few basic, evolutionarily ingrained, cross-cultural rules, and on top of these a good deal of culturally variable rules that nonetheless feel, within the culture, as inexorable as the basic ones. Eating, as the Epicureans suspected, thus parallels sexuality in significant ways: the mother-son incest taboo is universal, but whether sex with your second cousin, or your second wife, or outside of marriage, or during menstruation, is ‘morally wrong’ will differ from place to place. All of these practices are capable of being morally wrong, but only in the etymological sense of ‘moral’: pertaining to the practices of a group.

The classicist and philosopher G. E. R. Lloyd has argued in his Magic, Reason, and Experience (Cambridge, 1979) that it is often not just difficult but impossible to determine when, in ancient texts, some reference to “purification” or “cleansing” is meant in a medical, and when in a moral-religious, context. He notes that the ambiguity arises only because we ourselves are intent on separating the two usages, whereas the Greek writers themselves may not have seen any need to do so. He cites Mary Douglas’s work in a more general anthropological context, which shows convincingly that “notions of the ‘clean’ and the ‘dirty’ usually reflect fundamental assumptions concerning the natural, and the moral, order.” It would be useful to bear in mind the ease with which naturalistically understood rules about ‘what one does’ and moral proscriptions are elided, and not to assume that we are radically different from the ancient Greeks or the Lele of the Congo in this regard. For us, as for other cultures, there are presuppositions about what one may fitly do with an object that serve to constitute our very concept of the object, and these must precede any explication of our moral commitments vis-à-vis that object. On Douglas’s approach, the moral proscription against eating something would be nothing more than an ad hoc rationalization of the fact that some potential food item belongs to the class of things that are ‘not to be eaten’. Yet the tendency in philosophical discussions of vegetarianism has been to presume that we can meaningfully distinguish between ‘hygienic’ and ‘moral’ considerations that might give form and meaning to a person’s vegetarianism, as though hygiene had nothing to do with morality, as though the pretheoretical perception of an entity’s belonging to the class of edibles or inedibles had nothing to do with the way we subsequently give reasons for why we eat the things we do and not others.

I do not know if meat-eating is something humans ‘ought’ to be doing. I suspect the answer to this question has more to do with primatology than with moral philosophy: are we the sort of primate that eats meat? And with anthropology: are there human cultures that class all of what zoology places under the heading ‘animalia’ under the heading ‘inedibles’? The unwillingness of people on either side of the debate to consider the question in these terms surely is not doing any animals any good.

Poetry and Culture

Australian poet and author Peter Nicholson writes 3QD’s Poetry and Culture column (see other columns here). There is an introduction to his work at peternicholson.com.au and at the NLA.

Benjamin Britten: Music and poetry, attendant muses at the grinding gears

Leonard Bernstein once said of Benjamin Britten that his music was ‘dark, there are gears grinding and not quite meshing . . . making great pain’. That seems true. Certainly, there is no transcendence in Britten, as there is in Elgar, for example. The Dream of Gerontius is a work of faith and Christian journeying, but Britten was having none of Cardinal Newman’s agenda. Nor is there that charged explosion of sensuality Elgar achieved with In The South, though the composer does let his hair down occasionally, as in the Four Cabaret Songs or the finale of Spring Symphony. Even Elgar’s melancholy seems schooled in hopefulness by comparison with Britten. Britten is simply dark, and anxiety is close to the surface. Neither could Britten luxuriate, as Delius does, or glitter with delight, like Walton. But what Britten does have, something those other composers do not have to the same degree, is the most exquisite ear for poetry and the ability to set it superbly. To a very real degree, it is poetry that gets Britten through his dark nights of the soul.

Britten was lucky that one of his earliest friendships was with Auden, and, naturally enough, being with a poet of this stature couldn’t help but rub off on a sensitive and intelligent personality like Britten’s. There are many early settings of Auden, On This Island being one of the best known. Britten and Auden had a falling out later on, but I don’t think Britten ever forgot what he learnt from Auden about the intimacies possible when music and poetry work in harmony. True, Britten was rather scornful of Auden’s and Kallman’s text for Stravinsky’s The Rake’s Progress, but that scorn was based on a profound working knowledge of how to set dramatic texts for opera. Britten showed he could do it marvellously well in A Midsummer Night’s Dream. The other opera librettos may not always be settings of poetry, but they are certainly poetic. When Peter Grimes sings ‘Now the Great Bear and Pleiades . . .’ it is certainly something poetic we are hearing. The fact that it is dramatic too just goes to show how effective Britten’s settings could be when his imagination was fired by a suitable subject.

I was fortunate enough to meet Britten once, at the 1970 Adelaide Festival. He was one of the first composers I started collecting on LP. People of a certain generation remember those Decca recordings with their texts in a print size that made them easy to read, unlike today’s CD equivalents. Well, I was a particularly green student at the time, but I knew Britten had been interested in setting King Lear, and I asked him about that. There was an ominous silence, but I often think that would have been a more suitable final work for a composer of his temperament, rather than Death In Venice with its chilled ecstasies and gamelan playfulness. It’s one of those ‘what if’ questions we ask about artists we like. Fellini’s Mastorna project or Wagner’s proposed final symphonies also come to mind.

One of the first recordings I bought had Les Illuminations on it. I didn’t understand the full ramifications of the work at the time, but could feel Britten’s identification with the text. Somehow, music and text are integrated naturally, instinctively. You could say the same thing of all of Britten’s settings of poetry. There are no false notes. There is a real marriage of true minds, the muses of music and poetry meeting equally on Helicon, neither subsuming the other, each requiring the other’s succour.

The War Requiem is a real act of transfigurative creative feeling. There had been a kind of precursor when Mahler, in his Symphony No 8, set the Latin hymn Veni Creator Spiritus and then completed the work with the last part of Goethe’s Faust, but Britten was doing something more adventurous, at least from a literary viewpoint. Since Britten cannot find the transfigurative moment to redeem the deaths memorialised on the dedication page of the War Requiem, or fill Coventry cathedral with ‘Take me away’ chords out of Gerontius, he does something quite original. He inserts the poetry of Wilfred Owen throughout and, just when we might be expecting the summons to a higher cause, what we get is the sheer awfulness of war, the ‘pity of war’, the imagined reconciliation in hushed remonstrance in ‘Strange Meeting’. To think that this work was once regarded by a certain section of the musical avant-garde as the white elephant of British music speaks of their failure to react creatively to poetry in the way that Britten did so effectively in this work.

However, the composer-pacifist still had to deal with his own violent demons, and poetry seems to be one of the ways he accommodated what must have seemed, in the wake of the Second World War with its apocalyptic severances, the failure of art to prevent the facts of the Holocaust and the boundless dead. Britten played with Menuhin at the end of the war for survivors of the concentration camps, and the memories he brought back from that time prompted the song cycle he composed not long after, The Holy Sonnets of John Donne. The muscular confrontation with the fact of suffering brought forth a cycle in which Donne’s verse starkly counterpoises the music. The counterweight to this confrontational style is the calm and lucid settings of Shelley, Tennyson, Coleridge, Middleton, Wordsworth, Owen, Keats and Shakespeare in Nocturne, where Britten finds the kind of equipoise so often missing elsewhere. On the edge of sleep, or in the idea of sleep itself, the composer finds repose. To use Yeats’ words, the ceremony of innocence may be drowned (though not in most of the works written for children, such as The Young Person’s Guide to the Orchestra), but the memory that one was once innocent—Britten reaches for that with all his yearning. You still wake to find the blood and pain of the world, but during the cycle one has been enchanted, a little. Some of Puck’s juice has been sprinkled in our eyes too. The moment passes, but the moment was beautiful. And one doesn’t forget that it was real. Britten has made it so. Poetry has helped the composer get there. Perhaps, essentially, Bernstein is wrong. The gears do mesh, because if they didn’t there would be no music, no memorability, no greatness of spirit, which there clearly is in these compositions.

Britten was not a parochial composer, for all the jokiness about ‘Addleborough’ (Aldeburgh). The poets he set include French, Russian, German and Italian writers as well as American and British ones. His sensitivity encompasses Soutar and Hardy, Michelangelo and Jonson, nervous fibres reaching out for any memorable words to centre what seems, at heart, a certain pessimism. If one takes account of all the poetry settings Britten composed music for, and thinks of the literary input from Crabbe, Melville, James and Mann, and others, then one really is prompted to consider Britten one of poetry’s, and language’s, most eloquent advocates. A composer as subtle and as various as Britten in his choice of texts, and as memorable in his settings of them: here the muses were in agreement, and they bestowed their graces liberally, even though darkness is clearly visible and any joy achieved is hard won.

Life of a Cannibal

There is Quiet Street. On the bus, on our way to play sports on Randalls Island or Wards Island (depending on the season), we cross the first leg of the Triborough Bridge. Before we get to the Triborough, there is Quiet Street. It is 124th, or 126th. Maybe 120th. I can’t remember. The bus turns right off 3rd Avenue, and we must be silent. We are schoolboys, grades 7, 8 and 9, and we are as loud and shitty as you would expect. It’s a comfortable bus, a coach. Luggage racks, armrests that fold back and forth into the seat, a toilet in the back. No video screens. It’s the early eighties, and Mr. von Schiele or Mr. Trauth is standing up front by the bus driver, flexing his immense forearm. Mr. Trauth stutters. Mr. von Schiele’s first name is Per. We turn right off 3rd Avenue and everyone shuts up. We look out the windows. Someone once threw a rock at us on this street, and now we call it Quiet Street and we don’t talk. Nick and I created a sign language. I can’t remember if we did it because of Quiet Street, but it seems unlikely that we’d learn how to communicate with our hands just for one block.

There is the diner on Montague Street, where Beth and I order grilled cheese. It’s our place. Happy Days Diner, sunk into the street, full of irritable waiters and bad food. It’s next to a newsstand, and it’s got tables outside, but I’ve never seen anyone sitting at them. I write a poem for Beth that mentions the bright orange cheese of Happy Days, and the fact that she calls the subway ‘The Metro’. A euphemism, I call it. But she’s like that, sweet like that. She is from Boston, not Washington.

There is the corner of 91st and Park. I stand on the front steps of Brick Church with my choir and sing carols as Carnegie Hill’s Christmas trees light up in unison. We are golden-throated, I assure you. I sing weddings and funerals for hire. I am bluff tonight, familiar and smiley with my choir mates. It’s Christmas soon, and I am special in front of everyone’s eyes, and the air is crisp on my skin. It makes me feel confident. There will be a party afterwards.

There is the therapist’s on 82nd Street. I get to skip out on work for this, take off at 2:30, get stoned quick in the park maybe, and head for her office. She finds me deeply attractive, and she’s baffled by me. She’s not smart enough, of course, to make this worthwhile, but it’s a thing I’m doing. I enjoy discussing my thoughts and feelings. It amuses me to impress her with my complexity. I like pacing outside her building, too, smoking a cigarette. Her window’s right there, and I wonder if she ever looks at me before our appointment. Lots of young women are out at this time with their dogs and their children. I’ve had trouble walking lately. I’m aware of every step I take, and I’m aware that you’re aware, and the anxiety of performance is hobbling me. I’m a little shaky. I tell her about it. When the time comes for me to end the relationship, after a year of twice-a-week meetings that were, from the outset, futile, I am regretful but firm. After I close her office door for the last time and make my way across her vestibule, I hear her scream in frustration.

There is the northwest corner of 19th and 5th, catty-corner from my office. I lean against a building and smoke a cigarette. An SUV full of young black men drives by. They’re hanging out the windows. “YOU stand THERE,” one of them calls out to me, pointing authoritatively. I give him the finger and say “Fuck You!” cheerfully, a wide smile on my face. They pull over at the southwest corner of 19th and 5th and leave their hazards on. There are four or five of them–big, healthy gents. They surround me. They would like for me to apologize. I try to explain that it’s ridiculous for them to tell me to do what I am already doing, but they don’t want to listen. Eventually, in a tone I have carefully modulated to be sarcastic enough to spare my pride but not so sarcastic that I will get punched in the face, I say I’m sorry. 

There is Union Square. Jane meets me in the park at lunchtime. She is much younger than me, and has spent the day being admired by men. She says, “Boy, all you need to do is wear a tight skirt!” and I want to hit her for being coy. She asks about Beth. I shrug. I am driving her back to Vassar. We will spend one night in a wood-paneled motel room in Poughkeepsie, and then she will put on crappy jeans and a loose t-shirt and disappear. 

There is Fort Greene Park, where scores of dogs run off their leashes in the mornings before 9. I cut across the grass and worry vaguely that I am stepping in their shit. Dogs do smile. It’s painful to see them each morning, chasing each other, looking back to check that their people are watching. I am going to work in my worn shoes, hair wet with gel, and I am full of dread.

monday musing: df

Flying in at night affords a remarkable sight. You can’t imagine such a blanket of lights. And they end so abruptly at the ever widening borders of the city. Beyond them is nothing, just the blackness of the land at night, as if the entire cosmos were merely lights and void. It’s beautiful and big and strange.

***

They just call it DF, for Distrito Federal. Those from the US know it as Mexico City if they know it at all. And my general impression is that we, Americans, don’t know much about Mexico and don’t care very much that we don’t. The very fact that we call ourselves Americans (what about the rest of the Americas?) is kind of a giveaway to that basic neglect. If you’ve ever known any Canadians you know that the neglect is felt up North too. Come to think of it, if you’ve ever been anywhere else on the entire globe you’ve probably realized that Americans aren’t generally recognized for their great knowledge of and concern for the rest of the world. And that is nothing new. Everyone knows it. No one much knows what to do about it. And if the center of world power really does shift toward China in the generations to come we’ll all find out that xenophobia and self-centeredness weren’t invented in America either.

But still, it sucks. Being in DF just last week I was struck with a sense of shame at my own lack of knowledge and paltry understanding of all things Mexican. Americans on the whole, I’ll wager, tend to think of Mexico as dusty towns where nothing is going on, as a kind of no man’s land that simply produces streams of human beings headed for the borders of Texas and New Mexico and California, of poverty and corruption with a dash of violence and drugs. And those things exist. The history of Mexico from the Conquistadors in the 16th century up until the present is partly a history of continuous political upheaval, economic turmoil, and sometimes just straight up weird shit. And through it all the US, to its enduring shame, played little role but to take advantage of the bad times whenever it seemed convenient (see the snatching of a good chunk of Mexico during the 19th century with as shabby a casus belli as has been offered since . . . well, I guess they’re usually pretty shabby).

Which brings up a number of questions that ought to weigh on the mind in this newish century. Why is it that Mexican history followed a course so different from America’s in terms of political stability and economic development (and this applies to much of Latin America as well)? And why have the burdens and inequalities of Mexican history maintained themselves so stubbornly against the proposed solutions from all sides of the political spectrum? One answer, of course, is that the American colonists achieved two things simultaneously that proved difficult to do in the Mexican context. The Americans maintained a kind of political and economic continuity with the old country and decisively achieved their own independence at the same time. Mexico, by contrast, continued to be seen as a cash cow for Spain much longer, and the process of independence was much more tumultuous. The hacienda system by which the Spanish colonists extracted wealth and labor was so brutal and so retarding of political and economic development that it boggles the mind. It is still having its effects. In the 19th century, the Mexican constitution was a document to re-write at one’s leisure after the seemingly endless coups, revolutions, dictatorships, upheavals, and so forth.

On the other hand, US stability was achieved at a price, a pretty terrible one. The indigenous populations of North America were essentially wiped out. Things were less complex because they were made that way . . . by a genocide. There was genocide in Mexico too. The collapse of the indigenous populations through disease and maltreatment at the end of the 16th century was staggering. But the population and the history weren’t wiped away completely. There was too much there.

***

We bought a huge bundle of bright orange flowers and I trudged into the cemetery carrying them on my shoulder along with a stream of thousands of families on a Wednesday morning, bright sun, Dia de los Muertos. The great tombs to Mexico’s modernist poets and artists and intellectuals are like tombs to modernism itself with their spheres and blocks and slabs and geometric severity. We put flowers on some of the great ones. But that is not the most compelling part of the cemetery by a long shot. In truth, we weren’t at all prepared for the human emotion of it. Because the Day of the Dead in Mexico is about bringing people back to life again, if but for a moment. The care with which whole families are washing down and cleaning up the tombs of their loved ones becomes almost overwhelming. They are preparing whole meals for themselves and for the dead. They have hired Mariachi bands to play the favorite songs of the dead. They have resurrected the loves and needs and desires of the people they have lost. It is done with a sense of celebration that seems appropriate to the act of making life out of death. But a sadness is in the air too. Because the dead are dead. And if you can watch these families in their tender acts at the graves of those they have lost without your throat tightening up then you just aren’t paying attention.

***

In America, in the US version of America, it is easy to think of the New World as really a new world. The civilizations of the indigenous populations of North America were relatively easy to wipe away. They weren’t as complicated, intricate, and urban as the civilizations of the Teotihuacans or the Aztecs or the Mayans, to name a few. The Teotihuacan pyramids outside of DF are the ruins of a civilization that was not screwing around. It was big and organized and complex and it mobilized vast amounts of surplus labor to build some truly stunning, crazy crap. God knows it must have been awful to have ended up in the slave labor teams that carried the stones that built these monuments.

Ironically, one of the reasons that the Conquistadors were able, with such small numbers, to defeat such massive civilizations was because the peoples of Mesoamerica were already so busy exploiting the living shit out of one another. Divide and conquer. Play grievance off of grievance. Of course, once the Aztecs were laid low the indigenous peoples of Mexico down to the tip of South America were exposed to a kind of brutal oppression that would have made the Aztecs, with their rather curious need to pull human hearts from people’s chests at the tops of their temples, look rather touchy feely. It is difficult to think of all the human misery without feeling sick. The rare figures like the Spanish theologian Bartolome De Las Casas who argued in the 1540s that the Indians might, in fact, be human beings worthy of being treated as such, were notable precisely in how much they were the exception to the rule.

But there was too much of a civilization in Mexico, too many practices and beliefs and ways of life to wipe them all away completely. One of the most amazing things about being in Mexico, DF or elsewhere, is in realizing the degree to which these identities still survive in various ways. They are still part of the self-understanding of Mexicans today, especially now, after all the independence struggles and the way these struggles reached back into the pre-Columbian history in order to forge a new sense of nationalism that was not simply borrowed from Spain. The Aztecs are still around, kind of. Quetzalcoatl lives, sort of.

***

The Metro in DF smells exactly like the one in Paris, which is mildly disconcerting. It turns out the French built it. It’s a pleasure to ride. But I think I prefer the microbuses. You can take one for two and a half pesos (10.75 to the dollar). The back door is usually swinging open as the minibus putters along. People jump on and off almost as if it’s a Frisco streetcar. You can really get a feel for the vast seemingly limitless city in the microbus. DF isn’t a beautiful city in the standard sense of the term. It is too ramshackle and helter skelter for that. It includes the fanciest of contemporary architecture and the most miserable in shanty town construction. You’ve never seen people as rich as you can see ‘em in DF and you can find people with absolutely nothing too, literally dirt poor. The streets wind every which way without much reason, across spindly overpasses and back down into the heart of tree lined neighborhoods and then, whoosh, into a giant square or roundabout that spits you into the colonial center with cobblestone roads lined with old world structures and the occasional 16th century marvel. The Zocalo is a square that dwarfs anything of human scale. It is a square built to say something, though I’m not entirely sure what. If nothing else, it says, “we can do some serious shit here, too.” The cathedral explodes in the middle of the square in a fit of Churrigueresque Baroque that makes regular Baroque look like it was holding back. And then maybe the road keeps going out away from the center again, through neighborhood after anonymous neighborhood until the structures drift away into hastily built concrete boxes no one had time enough to paint. And those drift away into things thrown up with even less time and less material, merely the stuff that could be scrounged. And now the streets are barely streets, just winding dirt passages between shacks of all manner and size. And then those thin out as well and there is only scrubby brush and mild rolling hills and the dull rumble of the city whirring away in the distance.

***

In Samuel Huntington’s most recent pique of civilizational hysteria he lamented that Mexicans aren’t doing as good a job of assimilating into US culture as other immigrant groups have. This, he surmises, constitutes some kind of threat to the integrity of the American project. I’d say he has it ass backwards. There is a historical opportunity here to become a little more Mexican and I think we ought to take it up. It would be a start, at least, in redeeming ourselves after centuries of being an overall shitty neighbor in every conceivable way. Or look at it from a more selfish point of view. There is too much interesting shit about Mexico and Mexicans to let them get away with keeping it all to themselves. Becoming a little more Mexican would be a way to take better advantage of everything the North American continent has achieved in the way of human beings and the funny dumb amazing things they do. That’d be the chingon thing to do, becoming a little more Mexican.

Dispatches: Divisions of Labor

This Wednesday, graduate students at New York University will begin a strike with the intention of forcing NYU’s administration to bargain with their chosen collective agent, the union UAW/Local 2110.  (Full disclosure: I am one such, and will be participating in this action.)  This situation has been brewing for months, as the union’s contract with NYU, dating to 2001, expired in August.  That contract was something of a historic event, since it recognized the right to unionize of graduate students at a private university, for the first time in U.S. history.  That decision was helped along by a (non-binding) arbitration by the National Labor Relations Board, which ruled that NYU graduate students were workers in addition to being students.  Owing to turnover in the Board, however, and the Bush administration’s appointment of anti-labor replacement members, the Board has since reversed its precedent-setting decision in reference to a similar dispute at Brown University.  Though this is not a binding decision, and NYU is under no compulsion to derecognize its graduate student union, this is exactly what it has done, albeit while attempting to produce the impression that they tried and failed to come to an agreement. 

In actuality, the administration did not come to the bargaining table until August, at which point they issued an ultimatum to the graduate student union, insisting that it agree to a severely limited ‘final offer’ within forty-eight hours.  Made in bad faith, this offer was of course rejected, as it would have been impossible even to organize a vote of union members in the span allotted.  But it did allow President John Sexton and his administration to claim to have made an offer and been refused, which they have lost no opportunity to repeat in a series of inflammatory emails and letters to the student community.  Last week, in the wake of the union’s announcement that members had voted by approximately eighty-five to fifteen percent in favor of striking, the university administration sent yet another such communique, this time from Provost David McLaughlin, with the subject line ‘UAW votes to disrupt classes.’  Strategically, the administration seems to believe that by avoiding all reference to graduate students, and instead identifying the source of ‘trouble’ as ‘Auto Workers,’ the undergraduates and larger community might be convinced that the strike is simply the deluded power grab of a few dissatisfied individuals in league with an extraneous group.  Yet the transparency of the university’s rhetoric has had the opposite effect: undergraduates and faculty alike have been radicalized in support of graduate students.  My own students, for instance, have been extremely understanding, comprehending perfectly that graduate students’ teaching, for which a paycheck is received, tax and social security having been withheld, is work.

Politics, it is said, makes curious bedfellows.  One such pairing has occurred as a result of the current NYU situation.  The physicist Alan Sokal is best known for his 1996 article, published in Social Text, attempting to expose the spurious nature of references to math and science in the work of high theorists such as Derrida and Lacan.  His article, which purported to demonstrate twentieth-century physics’ confirmation of the anti-realism of post-structuralist theory, was duly accepted and published in a special issue of the journal on ‘Science Wars.’  After his revelation that the article was a ‘hoax,’ a small repeat of the two cultures conflagration ensued.  Responses abounded, including a rebuttal from Andrew Ross, professor of American Studies at NYU and the co-editor of the issue.  The entire episode has received its most detailed accounting and most rigorous intellectual genealogy (tracing the roots of this debate back to the development of logical positivism) in an article by yet another NYU professor, John Guillory.  That piece, ‘The Sokal Affair and the History of Criticism,’ is well worth reading, not least for its clarification that the stakes in the two cultures debate are not necessarily related to one’s position vis-a-vis ‘postmodernism’ or cultural studies, which, as Guillory convincingly demonstrates, has no fixed relation to particular political stances.  The current labor strife at NYU reconfirms this analysis, as Sokal and Ross find themselves on the same side of the barricades as members of Faculty Democracy, the faculty organization urging the NYU administration to bargain with the graduate student union.

Sokal has written this clear-eyed summary (to which Robin previously linked) of the issues involved, and his commonsensical tone is a much-appreciated palliative in the midst of the rhetorically overheated statements issuing from many quarters.  Perhaps most importantly (and most ironically for someone who has been lambasted as an ‘unreconstructed’ leftist), Sokal points out that the paternalism of the administration’s position should be rejected.  Whether or not one considers the graduate students to be right in their cause, he points out, their democratically decided resolution to collectivize in order to negotiate contracts should be respected.  Sokal’s distaste for the increasingly rapid transformation of the university into an institutional substitute for parental duties (and a remedial solution to the decrepitude of public high school education) is one I share.  I would add that the current impasse is more a matter of structural conflict than of political sympathy.  Private universities as they exist today depend on a pyramidal structure: a large number of graduate students at the bottom of the labor force are needed to perform much of the undergraduate teaching, while at the top of the pyramid are ever fewer tenured faculty.  Even these can be further divided into ‘stars’ who command greatly disproportionate income while having few teaching responsibilities, and the lower order of adjunct professors who perform much of the remedial education in such subjects as composition.  This system necessarily produces many more credentialed Ph.D.’s than the labor market can employ at the higher levels, which in turn means that many graduate students spend years teaching for a pittance without making it to the security of a tenured position.  Hence the pressure on this beleaguered stratum to unionize, so as to ensure a modicum of stability of salary and benefits.

Though they are a transient class, passing through degree programs, as opposed to a permanent workforce, graduate students have thus come to bear a very large portion of the daily labor of teaching undergraduates.  Interestingly, what the commotion about this strike has appeared to ignite at NYU is a debate about collectivism.  While many are fully willing to grant a certain ethical status to the picket line, and so defer to the right to strike that has been the hallmark victory of the labor movement, the union’s opponents (be they faculty or students) tend to retreat to personal responsibility as the ground on which to base such decisions.  Thus does American individualism reappear in the debate, as usual licensing those for whom ethical imperatives are always imposed from without, rather than perceived from within.  Combined with the detached, analytical impulse that intellectual work requires, this produces a strong ideological propensity for members of this particular class of workers to dissent from counting themselves as part of a collective organization, with the exception, of course, of their belonging to the university itself.  Membership in the university, however, is mystified by the institution’s self-image as the social location outside of or beyond corporate culture and other more baldly hierarchical sites of work.  That the intellectual and editorial freedom that the university trumpets as its role to protect might conflict with the conditions under which that freedom is maintained is a paradox that remains all too often unclear to the participants in this debate.  Whether the temptation to fall back on just this founding myth of the university will prove to be the union movement’s undoing, we shall discover in the days and weeks ahead.

Dispatches:

Local Catch
Where I’m Coming From
Optimism of the Will
Vince Vaughan…Eve Sedgwick
The Other Sweet Science
Rain in November
Disaster!
On Ethnic Food and People of Color
Aesthetics of Impermanence

Rx: The War on Cancer

I will also ask for an appropriation of an extra $100 million to launch an intensive campaign to find a cure for cancer, and I will ask later for whatever additional funds can effectively be used. The time has come in America when the same kind of concentrated effort that split the atom and took man to the moon should be turned toward conquering this dread disease. Let us make a total national commitment to achieve this goal. America has long been the wealthiest nation in the world. Now it is time we became the healthiest nation in the world. —President Richard M. Nixon, in his 1971 State of the Union address.

I. THE SITUATION AT PRESENT

The famous Greek doctor and author of the Hippocratic oath defined cancer as a disease which spreads out to grab parts of the body like “the arms of a crab”. What proves fatal for the victim is the spread of the cancer cells beyond the site of origin, and in this sense it was a thousand years later that Avicenna noticed that “a tumor grows slowly and invades and destroys neighboring tissues”. Faithful to its name in more ways than could possibly have been anticipated by Hippocrates, the disease which has launched the $200 billion “War on Cancer” in America continues to spread, invading the lives of almost every family. Based on available data, there were 10.9 million new cases of cancer worldwide, 6.7 million deaths, and 24.6 million persons who had been diagnosed with cancer in the previous five years. As in many other areas, the USA unfortunately leads in this one as well. More than 1.5 million Americans develop cancer each year, and the disease claims some 563,700 lives annually, killing more Americans in 14 months than the combined toll of all wars the nation has ever fought (a new cancer is diagnosed every 30 seconds in the United States, and about 1,540 people die each day from their disease). A look at the worldwide incidence of cancer raises some puzzling issues, especially the unexpectedly high incidence of cancers in the USA:

[Chart: worldwide incidence of cancer by country, showing unexpectedly high rates in the USA.]
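A quick back-of-the-envelope check (my own arithmetic, using only the figures quoted above) shows that the annual and daily death counts are mutually consistent:

\[
\frac{563{,}700\ \text{deaths per year}}{365\ \text{days}} \approx 1{,}544\ \text{deaths per day},
\]

which squares with the roughly 1,540 daily deaths cited above.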

If we ascribe the increased incidence solely to the aging of the population, then why is the incidence so much higher in the USA than in some of the developed countries where life expectancy is comparable? On the other hand, if lifestyle is more important, as suggested by the association of smoking with cancers of the lung, then why is the incidence not equally high in countries where people smoke at least as much as in the USA? One answer could be that smokers in regions such as South America or India do not live long enough to develop cancer. However, the incidence of lung cancer in Sweden, with an average life expectancy of 80.3 years, is less than half of that in the USA, which has a life expectancy of 77.4 years (22 versus 55.7 per 100,000, respectively), even though 22% of adults smoke in both countries. This suggests that lifestyle may be important, but that smoking may not be the only important factor.
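Making that comparison explicit (again my own arithmetic, from the incidence figures just cited):

\[
\frac{22}{55.7} \approx 0.39 < \tfrac{1}{2},
\]

so Sweden’s lung cancer incidence is indeed less than half the American rate, despite identical (22%) adult smoking prevalence in the two countries.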

Table 1. Approximate incidences of daily smoking among adults in different geographic areas.

Region            Men      Women
Sweden            22       24
Europe            46       26
Eastern Europe    60       28
Middle East       41       8
Turkey            63       24
Rest of Asia      44-54    4-7
South America     40       21
Africa            29       4

Source: Tobacco Alert. Geneva: World Health Organization, Programme on Substance Abuse, 1996


The idea that a genetic predisposition to cancer may have something to do with these high American numbers has been largely laid to rest by the experience of the Japanese who immigrated to America. Both the incidence of cancer and the types of cancer were quite different between fresh immigrants and their American counterparts; these differences, however, disappeared in second-generation Japanese Americans who adopted the local lifestyle.

II. PROBLEMS IN THE FIGHT AGAINST CANCER

President Nixon declared the War on Cancer 34 years ago using the 100 words which stand as an epigraph for this essay. Acknowledging that the tools necessary to accomplish the task were missing, the mandate was to invest money in research and apply the results to reduce the incidence, morbidity and mortality from cancer. After approximately $200 billion spent on this war since 1971 (if we add up taxes, industry support, etc.), 150,855 experimental studies on mice, and the publication of 1.56 million papers, the results can best be summarized in this one graph:

[Graph: U.S. mortality trends since 1950, with deaths from heart disease declining sharply while cancer mortality remains nearly flat.]

While deaths from heart disease have declined significantly in the last 30 years, the percentage of Americans dying from cancer, save for those with Hodgkin’s disease, some leukemias, carcinomas of the thyroid and testes, and some childhood cancers, is about the same as in 1950. For the last twelve years, cancer mortality has declined by about 1% annually. Early detection has had some significant impact, but once a cancer has spread, the outcome has generally not changed for the last half-century.
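To put that decline in perspective (a rough calculation of my own, assuming the 1% figure holds in each year), twelve years of compounding yields

\[
1 - (0.99)^{12} \approx 1 - 0.886 = 0.114,
\]

that is, only about an 11% cumulative reduction in cancer mortality over the whole period.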

As someone who has been directly involved in cancer research since 1977, and obsessed by it for longer, I am a first-hand witness to the by now familiar cycles of high expectations and deflating disappointments which have been its hallmark for the last three decades. Because the stakes are so high, both in terms of life-and-death issues and the staggering nature of the finances involved, emotions tend to run high on all sides. In this and a series of subsequent essays, I would like to summarize some of the rather obvious reasons why this war on cancer has not yet manifested the anticipated, tangible signs of victory, as well as the dramatic gains that have been achieved at the scientific level as a result of this unprecedented investment in basic science. I hope that at the end of it all, you will be able to view the score-card dispassionately and declare a winner.

  • Even though President Nixon and subsequent administrations have continued to invest heavily in cancer research, with the dedicated budget for the National Cancer Institute alone rocketing up to more than seven billion dollars this year, the monies are not being spent as wisely as they could be. For example, the funding agencies tend to reward basic research performed in Petri dishes and mouse models that bear little relevance to humans; some 99% of investigators use xenografts. Imagine the exceedingly contrived scenario of achieving a “cure” in a severely immune-compromised animal injected locally with human tumor cells and then treated with the strategy being tested. Is it a surprise when the results cannot be reproduced in humans? Basic cancer research may one day succeed in identifying the signaling pathways that determine malignant transformation; however, it will be a long time before the entire process of cancer initiation, clonal expansion, invasion, and metastasis is understood, especially in the context of the highly complex, poorly understood micro-environment in which the seed-soil interaction occurs. On this approach, an effective therapy for cancer can essentially be developed only after we understand how life works. Can our cancer patients afford to wait that long? Isn’t the history of medicine replete with examples of cures obtained years, decades, even centuries before their mechanisms of action were fully understood? What about digitalis, aspirin, cinchona, vaccination?
  • There is an odd love-hate relationship that has developed between Academia and the Pharmaceutical industry. On the one hand, major research and development (R&D) efforts by industry, conducted under great secrecy, result in the identification of potentially useful novel agents which nonetheless must ultimately be tested in humans. Credible clinical trials in human subjects are conducted by academic oncologists. On the other hand, advances being made in the laboratories of academic researchers need the partnership of industry for commercial and widespread application. This forces the Industry and Academia to become reluctant bedfellows. Roughly 350 cancer drugs are in clinical trials now.
  • In order for a drug to show efficacy, the FDA demands that it first be tested in animal models that are not relevant to humans. To make matters worse, when drugs are approved for human trials, they can only be tested in terminally ill patients. Many agents that would be effective in earlier stages of the disease are therefore thrown out like the baby with the bathwater. Finally, the end point sought in most drug trials, even in end-stage patients, continues to be a significant clinical response. Very few, if any, surrogate markers are used to gauge the biologic effects of the drugs. These surrogate or bio-markers include proteins produced by the abnormal genes, as well as processes and pathways that distinguish cancer cells from normal cells, such as the formation of new blood vessels (angiogenesis). If a drug does not produce the desired clinical end point, it is likely to be abandoned completely, even though its biologic activity could be harnessed for more effective use in combination with other agents.
  • When the internet dotcom bubble burst in the late 90s, the biotechnology industry was the big winner, as some of the best minds in the country made lateral moves and began to invest their talents in this area. The striking change over the last decade in the pharmaceutical industry has been its ability to attract and retain high-caliber academic scientists and clinical investigators. Even with this vital infusion, it takes 12-14 years and a prohibitive ~800 million dollars for a pharmaceutical company to get a new drug approved, most of the money raised from a private sector that is clamoring for a profit. After the arduous R&D process and the tedious, time-consuming, labor-intensive animal studies, by the time a clinical trial is undertaken in human subjects the stakes are already so high that companies may find themselves struggling to demonstrate the tiniest statistical benefits over each other's products.
  • The catch phrase today is “Targeted Therapies”: the idea that a convergence of science and advanced technologies will illuminate the cumulative molecular mechanisms that ultimately produce cancer, and that this will lead to objective drug design to pre-empt or reverse the cancer process. Except for the drug Gleevec, developed against chronic myeloid leukemia (CML), a rare type of leukemia in which a single gene mutation underlies the pathology, all targeted therapies so far have met with only modest success. For example, the recently approved Erbitux and Avastin for cancer of the colon and rectum improved survival by 4.7 months when given in conjunction with chemotherapy. Even in the area of targeted therapies, the efforts are frequently scattered. Academia, industry, and institutions such as the NCI, FDA, CDC, EPA, and DOD are not coordinating their resources efficiently. For example, hundreds of researchers across the nation are performing gene expression and proteomic experiments independently, diluting the number of specific cancers examined for potential targets, instead of developing organized collaborative studies.
  • Research on such topics as epidemiology, chemoprevention, diet, obesity, lifestyle, environment, and nutrition is woefully underfunded.

III. WHAT SHOULD BE DONE TO FIGHT THE WAR MORE EFFECTIVELY

“Stomach cancer has disappeared for reasons nobody knows and lung cancer has rocketed upward for reasons everyone knows,” says John Cairns, a microbiologist now retired from the Harvard School of Public Health. To win this war, some of the steps that need to be taken are rather apparent, while others remain to be carefully debated and planned. For the vast majority of cases, no single “cause” can be identified; cancer is presently believed to be triggered by a combination of genetic predisposition and lifestyle factors such as diet and occupation. Consequently, the chances of developing cancer can be significantly reduced by not smoking, adopting a healthier lifestyle, and maintaining proper nutrition. Focus is needed on improving methods of early detection, on treating precancerous conditions (the dysplasias and metaplasias), and on understanding why certain individuals and families are susceptible to the malignant process.

Where research is concerned, man must remain the measure of all things. Human tumors, rather than mouse models, should be studied directly. To harness rapidly evolving fields like nanotechnology, proteomics, immunology, and bioinformatics, and to focus them on serving the cause of the cancer patient, we must insist on collaboration between government institutions (NCI, FDA, CDC, DOD, etc.), academia, and industry. In the case of the Human Genome Project, collaboration was the key to rapid mapping. The same concerted effort now needs to be invested in sequencing the mutations in hundreds of freshly obtained human cancers of all types, a venture that has been proposed as the Cancer Genome Project. It is well known that the machines and robotics developed worldwide for sequencing the human genome are now either sitting idle or being used to sequence the genomes of microorganisms and fruit flies. They would serve a far better purpose sequencing several hundred breast, lung, colon, and prostate cancers to identify the most common mutations. Identification of specific mutations will lead to the discovery of the seminal signaling pathways unique to organ-specific malignant cells, which can then serve as therapeutic targets. Given that nature is highly parsimonious, it is likely that some of these pathways recur across diseases, as was the case with Gleevec. This drug was developed specifically to inhibit the tyrosine kinase of the Abl gene, and has proved effective in producing remissions in more than 97% of CML patients. It has since been discovered that patients with gastrointestinal stromal tumors (GIST) can also respond to this drug, as their cells use the same tyrosine kinase blocked by Gleevec. More recently, papillary thyroid cancers, subsets of patients with other bone marrow disorders (for example, those showing translocations between chromosomes 5 and 12), and even cases of as different a disease as pulmonary hypertension have been found to respond to Gleevec. What this shows is that some key pathways are likely to be shared by cancers, and even by other diseases, across organs, and their identification could deliver unexpected benefits.
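
To make the proposed exercise concrete: the core analysis of such a mutation survey is conceptually simple, tally how often each gene turns up mutated across the sequenced tumors and rank genes by recurrence. The Python sketch below is a deliberately minimal illustration; the tumor identifiers, the gene sets, and the recurrence threshold are all hypothetical stand-ins for real variant-call data.

    from collections import Counter

    # Hypothetical variant calls: tumor ID -> set of genes found somatically
    # mutated in that tumor. In a real survey these would come from
    # sequencing several hundred breast, lung, colon, or prostate cancers.
    mutations_by_tumor = {
        "breast_001": {"TP53", "PIK3CA"},
        "breast_002": {"TP53", "GATA3"},
        "lung_001": {"TP53", "KRAS", "EGFR"},
        "colon_001": {"KRAS", "APC", "TP53"},
    }

    def recurrent_mutations(calls, min_fraction=0.25):
        """Rank genes by the fraction of tumors in which they are mutated,
        keeping only those above a recurrence threshold."""
        n = len(calls)
        counts = Counter(gene for genes in calls.values() for gene in genes)
        ranked = sorted(counts.items(), key=lambda kv: kv[1], reverse=True)
        return [(gene, hits / n) for gene, hits in ranked if hits / n >= min_fraction]

    for gene, fraction in recurrent_mutations(mutations_by_tumor):
        print(f"{gene}: mutated in {fraction:.0%} of tumors")

In a real project the same tally would be aggregated by signaling pathway rather than by single gene, which is exactly how shared targets such as the Gleevec-sensitive kinase would surface.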

Cancer is a multi-step process that involves initiation, expansion, invasion, angiogenesis, and metastasis. Each stage of the disease may offer a variegated set of targets, making the one-drug, “magic bullet” approach feasible only in the handful of cancers where a single mutation underlies the malignant process (as described above for CML and Gleevec). A critical lesson from the development of successful therapy for AIDS is that three drugs targeting the same virus had to be used before effective control of its replication was achieved. Similarly, multiple targets must be attacked simultaneously in the cancer cell. A “seed and soil” approach, in which drugs act on both the malignant cells and their microenvironment, would be preferable to targeting either in isolation. For example, a drug that blocks a key deregulated intracellular signaling pathway and checks the malignant cell's perpetual proliferation can be combined with an anti-angiogenic drug that stops the formation of new blood vessels and arrests the invasion of tissues by the tumor. The objective choice of agents would require the practice of evidence-based medicine, and this is what the government institutions should be rewarding investigators for. Many effective therapies directed against components of the seed and soil are already available, but researchers are allowed to use only one investigational agent at a time, and then only in patients with advanced disease. This stilted, almost self-defeating approach needs to be abandoned. Patients who already have a diagnosis of cancer cannot afford to wait.
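
The arithmetic behind the combination argument can be made vivid with a deliberately crude toy calculation, in which every number is invented for illustration: if net tumor growth requires both residual proliferative signaling (the seed) and residual blood supply (the soil), the two suppressions multiply, and a combination outperforms either agent alone at the same potency.

    # Toy illustration of the "seed and soil" combination argument.
    # All rates are invented; this is a cartoon, not a tumor model.

    def net_growth(prolif_inhib, angio_inhib):
        """Relative tumor growth, given fractional inhibition (0 to 1)
        of proliferative signaling (seed) and angiogenesis (soil).
        Growth is assumed to require both, hence the product."""
        residual_seed = 1.0 - prolif_inhib   # surviving proliferative drive
        residual_soil = 1.0 - angio_inhib    # surviving blood supply
        return residual_seed * residual_soil

    print(net_growth(0.6, 0.0))  # signaling inhibitor alone -> 0.40
    print(net_growth(0.0, 0.6))  # anti-angiogenic alone     -> 0.40
    print(net_growth(0.6, 0.6))  # both together             -> 0.16

Under these invented assumptions, two agents that each leave 40% of growth intact jointly leave only 16%, which is the multiplicative logic behind attacking seed and soil at once.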

I am optimistic that within the next few years, given the power and sheer velocity of evolving biotechnology, the very fundamentals of cancer research and treatment will have undergone cataclysmic changes. It may not be possible to cure cancer within the next decade, but, in the words of the NCI Director, Dr. Andrew von Eschenbach, it may very well be possible to “transform cancer into chronic, manageable diseases that patients live with – not die from”.