Teaser Appetizer: Economics of Death

No, this is not about death at the hands of the deranged beast in us that wages war and genocide, nor about the economics of nature's episodic fury, but about the death that takes us away quietly in our unheroic old age. It is about the economics of snatching a few extra days from the clutches of death, when frayed emotions bargain with the inevitable.

How furiously should we ‘rage against the dying of the light’?

Here is a true story: cast your decision.

An 86-year-old man, afflicted with the tremors and frequent 'fall attacks' of Parkinson's disease, stumbles at home. His skull crashes against the travertine and spews blood. The paramedics whisk him away; the neurosurgeon valiantly operates and admits him to the ICU. He is unconscious, his lungs bellowing with the help of a respirator. The doctor says the skull is fractured, the brain is lacerated, and there is 'no hope'. You are the family; what would you do now?

1. Do everything possible to keep him alive.
2. Disconnect the life support.

Both answers are right; that is the dilemma.

********

THE COST BURDEN

‘I want to die on schedule.’

In 1900, a man in the USA could expect to live to 47 years, and many years fewer in the less fortunate lands of Asia and Africa. Good nutrition, improved sanitation, mass immunization, and a few antibiotics raised life expectancy to 67 years by 1960. But since then, over the following four decades, it has struggled up by only about seven more years. (Table: life expectancy of males at birth in the USA. Source: National Center for Health Statistics):

Years        Life expectancy
1900-1902    47.9
1909-1911    49.9
1919-1921    55.5
1929-1931    57.7
1939-1941    61.6
1949-1951    65.5
1959-1961    66.8
1969-1971    67.0
1979-1981    70.1
1989-1991    71.8
1997         73.6
1998         73.8
1999         73.9
2000         74.3
2001         74.4
2002         74.5
2003         74.8

Health expenditure in the US in 1960 was $144 per capita, which amounted to 5% of GDP. By 2003 the corresponding figures had escalated to $5,635 and 15%. The gluttonous medical-industrial complex gobbled up hundreds of billions of dollars without a proportionate improvement in health. Ironically, it is not old age but the process of dying that devours a substantial part of this swollen sum; the last year before death consumes between 26% and 50% of total lifetime health care expenditure.

Investigators at Rutgers University compared expenditure in the terminal year with that in a non-terminal year for people over 65. Between 1992 and 1996, the mean expenditure was $37,581 in the terminal year, compared to $7,365 in a non-terminal year: roughly five times as much.

US Medicare spends 28% of its budget on the last year of life, and most of that in the last 30 days. Death in the hospital is costly: 4,692,623 persons died in US hospitals in 2003, and the hospitals received an average of $24,429 per person for terminal care, for an average stay of 23.9 days. [Dartmouth Atlas of Health Care]
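These figures hang together arithmetically; a minimal back-of-envelope sketch, using only the death count and average payment quoted above:

```python
# Back-of-envelope check: deaths in US hospitals (2003) multiplied by the
# average payment per person for terminal care, both figures quoted above.
deaths_2003 = 4_692_623       # persons who died in US hospitals in 2003
avg_payment = 24_429          # average payment per person, in dollars

total = deaths_2003 * avg_payment
print(f"Total terminal-care payments: ${total / 1e9:.1f} billion")
# Prints: Total terminal-care payments: $114.6 billion
```

The product lines up with the $114.6 billion in hospital Medicare revenue cited in the revenue section below.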

A larger percentage of older people in the population strains healthcare financing, since far greater numbers at advanced ages are likely to die in the following year. According to the Canadian health tables, 'the probability that a male aged 40 will die during the next year is 0.2%, while at age 70 it is 3.0%, and at age 90 it is 18.6%'.

But people who express their wishes about terminal care fare differently. In a study done between 1990 and 1992, persons who had directed in advance what intensity of service they wanted near death spent much less: $30,478, compared to $95,305 for those without an advance directive. [Archives of Internal Medicine, 1994.] Those with an expressed 'do not resuscitate' order before admission to the hospital spent much less than those who issued it during the course of their hospital stay.

Can we afford the ever-increasing cost of caring for the terminally sick? Maybe it needs a different perspective.

***********

THE REVENUE MODEL

‘Don’t agonize about prolonging life, just postpone my death.’

The flip side of cost is revenue. A cost in one business system shows up in the revenue column of another. The transfer of this cash creates jobs in transit, in this case in an industry that tends to the sick and tries to keep the rest healthy.

In 2004, health care was the largest US industry, with 545,000 establishments and 13.5 million employees. It is projected to create about 19% of all new jobs through 2014, more than any other industry.

Hospitals comprise only 2% of healthcare establishments, but they employ 40% of all workers. It is calculated that hospitals collected $114.6 billion in Medicare revenue in 2003 for the care of the terminal 23 days of persons over 65.

Health expenditure in the US in 2005 was $1.7 trillion, which probably makes it the largest single industry in the world. One could reasonably argue that the economy generated by health care is more desirable than that of many other industries, such as alcohol and tobacco, and of the other two big ones: war and religion.

Health care’s contribution to the GDP was 15.3% in 2005 and should increase in coming years. And why not!

************

THE AGONY OF ETHICS

‘Economics is the bastard child of ethics.’ –T. S. Eliot.

An isolated cost-versus-benefit matrix should not determine end-of-life measures. The insatiable appetite for technology makes the choice between cost and ethics even more difficult for the family and the health care providers. In the absence of an advance directive, all the options in the care of the terminally sick often seem ethically right. Sometimes the quality of remaining life helps in deciding the course.

About 10% of all who die after age 65 are severely impaired and 14% are fully functional. Between these two ends of the spectrum are partially functional people. Disability increases with age. In a survey done in 1986, only 20% between the ages of 65 and 74 were completely functional and 3% were severely disabled. At age 85 about 22% were incapacitated and only 6% were functioning fully.

Most people will agree that prolonging everyone's life, irrespective of disability, is the right choice; but standing by the bedside of a terminally sick person, the questions become: is it worth it, and at what emotional and financial cost? There are no wrong answers.

“Most Americans can’t afford a comfortable death. More than likely, their savings accounts won’t hold up after intense hospitalization. And, as the insurance system now works, benefits will cover ample surgeries and procedures, but once those limits are met there is nothing left for palliative care…”

“At least three barriers block the way for a more comfortable death… (1) The health-care system fails to offer an institutional structure to support appropriate choices for dying patients. (2) Insurance mechanisms fall short of providing adequate support beyond high-tech care. (3) American culture embraces high-tech medicine while harboring an overwhelming fear of painful death. Discussions of palliative care rarely enter into the picture.”— From the September 1996 Medical Ethics Advisor.

We can devise the cost controls, extol the virtuous revenue, debate the knotted ethics but which balm will soothe the emotions?

**********

The cremated remains of my father slid from my fingers. The slow breeze hugged the ashes gently and floated them away into the heaving Ganges. His last remains bobbed and crested the waves and then merged with the primordial waters from whence he had sprung as ‘life’ many eons ago.

I saw him disappear –forever. But a thought lingered: maybe we shouldn’t have pulled the plug.

Monday, January 1, 2007

A Case of the Mondays: the Year of Dashed Hopes

I presume that at the end of each year, pundits, writers, and bloggers gather to discuss the year’s political trends. Most of what they discuss is invariably pulled out of thin air, but I hope I’m basing my own analyses on enough evidence to escape that general description. It’s accurate to characterize 2004 as the year of liberal democratic hopes: the Orange Revolution in Ukraine, the new parliamentary elections in Georgia consolidating 2003’s Rose Revolution, the calls for democratic revolution in Iran. This continued into early 2005 with Lebanon and the scheduled elections in Palestine.

And then it all crashed. New Ukraine was plagued by corruption. The Tulip Revolution didn’t go anywhere. Frustration with the slow pace of reform in Iran catapulted Ahmadinejad to power instead of ushering in a new democratic system. Fatah looked weak on corruption, weak on Israel, and weak on public order, while Hamas looked like a fresh change.

In the Middle East, 2006 was the year of dashed hopes, even more so than 2005. Iraq was irrevocably wrecked long before 2006 started, but 2006 was the year the violence escalated. Most wars kill many more people than any subsequent occupations; in Iraq, there were more people killed in 2006 than in 2003. The Sunni-Shi’a rift had been there for fifteen years, but intensified over the course of last year, and spilled over to other countries in the region: Iran, Saudi Arabia, Lebanon. Throughout most of the year, there was only escalating violence and increasing legitimization of Muqtada Al-Sadr, but right at the end, the execution of Saddam was probably carried out by Al-Sadr’s followers, rather than by the government.

The single country in the region whose hopes were dashed the most was of course Lebanon. The Cedar Revolution was supposed to usher in a new age of democracy built along the same pillarized model that had worked in the Netherlands for about a century. Hezbollah was supposed to reform itself from a terrorist organization to a legitimate if fundamentalist political party. And the country was supposed to become independent of Syrian and Iranian influence. To a large extent due to Israel’s lack of knowledge of foreign policy responses that don’t involve military force, those hopes disintegrated in the summer of 2006.

In Palestine, Hamas won the parliamentary election, which Israel considered equivalent to a writ permitting the IDF to kidnap elected Palestinian officials at will. As had happened in Nicaragua in the early 1980s, the Hamas government found itself stripped of development aid, and became increasingly radicalized as a result. Israel responded the only way it is familiar with, i.e. with military force, and killed 655 Palestinian civilians in the Occupied Territories, up from 190 the previous year.

And in the US and Iran, two conservative Presidents with a vested interest in muzzling liberal democratic opposition escalated their saber-rattling game. In Iran, that meant crackdowns on opposition media, especially in the wake of Israel and Hezbollah’s war. Although toward the end of the year, reformists gained power in the election, real power in Iran lies in the hands of unelected Supreme Leader Khamenei, who is as opposed to democratic reforms as Ahmadinejad.

At the same time, 2006 was the year of recognition. In Iraq, the situation grew so hopeless that it became impossible to pretend everything was going smoothly. Right now the only developed country where the people support the occupation of Iraq is Israel, where indiscriminately killing Arab civilians is seen as a positive thing. The Iranian people did the best they could to weaken the regime within the parameters of the law. Hamas’s failure to deliver on its promise to make things better led to deep disillusionment among the Palestinians, which did not express itself in switching support to even more radical organizations. And most positively, the Lebanese people, including plenty of Shi’as, came to see Hezbollah not as a populist organization that would liberate them from the bombs of Israel, but as a cynical militia that played with their lives for no good reason.

Elsewhere, there were no clear regional trends. However, the political events of 2006 in the United States might point to a national trend of increased liberalism. On many issues the trend is simply a continuation or culmination of events dating at least fifteen years back, but on some, especially economic and foreign policy ones, the shift was new. In 2002 and 2004, the American people voted for more war; in 2006 they voted for less. While they didn’t elect enough Senate Democrats to withdraw from Iraq, they did express utter disapproval of the country’s actions in Iraq. This trend originated in the Haditha massacre of 2005, and Bush’s approval rating crashed in 2005 rather than in 2006, but it was in 2006 that the general discontent with the direction of American politics was expressed in a decisive vote for a politically weak party over Bush’s party.

So after the hope of 2004 and early 2005, 2006 was not just the year when violence rebounded and democracy retreated in the Middle East, but also the year when public unrest with the status quo grew. This unrest did not manifest itself in any movement with real political power, and I don’t want to be too naively optimistic to predict that it will. I mentioned that the Iranians did everything within the parameters of the law to support democratic reforms; but Iran’s system is so hopelessly rigged that nothing within the parameters of the law can change anything. Still, indirect action typically sets the stage for direct action; Martin Luther King’s civil rights movement stood on the shoulders of decades of NAACP and ACLU litigation.

The cliché way to end this would be to look at the situation in Iran and to a lesser extent Lebanon and Palestine, and posit that the country is now at a crossroads. I don’t think it is; the Iranian people have had the infrastructure and social institutions to overthrow theocracy for a number of years now, and came closest to doing so in 2002, before the US invasion of Iraq. It may be that the Iranian people have grown so tired of the regime that even “We hate America and Israel more than our opponents” isn’t enough to hold Khamenei and Ahmadinejad afloat. Or it may be that Israel will decide to save the regime by launching military strikes against its nuclear weapons program. And it may be that after either of these scenarios, there will be a political reversal the next year modeled on a color/flower revolution or on a reaction against such a revolution. Hopes can be dashed, and dashed hopes can be rescued, as 2006 taught us.

Selected Minor Works: Where Movies Came From

Justin E. H. Smith

In The World Viewed, Stanley Cavell wryly comments that it was not until he reached adulthood that he learned “where movies come from.” As it happens, movies come from the same place I do: California. Now as an answer to the question of origins, this is hardly satisfying. “California,” as a one-word answer to anything, has the air of a joke about it, whereas we at least aim for earnestness. This is a problem that has vexed many who have left California and attempted to make sense of it at a distance. The turn-of-the-century Harvard philosopher Josiah Royce once declared of his home state that “there is no philosophy in California.” Yet the state’s generative power, and my attachment to it, have left me with the sense that something of philosophical interest is waiting to be said, by me if I’m lucky, if not in it, then at least about it and its exports.

My sense is that these two questions, the autobiographical and the film-historical, may be treated together. This is not because I was born into a Hollywood dynasty –far from it– but because throughout most of my life, memories were something shared, something public, something manufactured. By this I mean that, instead of memories, we had movies, and instead of conversation, we mimicked dialogue. I use the past tense here, as in the title (though there in acknowledgment also of a debt to Joan Didion), because it is already clear that movies will not be the dominant art form of the twenty-first century, and if we agree with Cavell that a movie is a sequence of automated world projections, then movies are no longer being made.


A contingent development in the history of technology left us with an art form thought by many to reveal something very significant about what we as humans are. Cavell chose to express this significance in the Heideggerian terms of film’s ‘world-disclosing power’ (did Heidegger ever even see a movie?). Already before 1920, Royce’s Harvard colleague Hugo Münsterberg had argued that the ‘photoplay’ serves as a powerful proof of Fichtean idealism: what need is there for Kant’s thing-in-itself if a ‘world’ can exist just as well projected on a screen as embodied in three dimensions?

I take it for granted that the world disclosed to us today is the same world to which human beings have had access for roughly the past hundred thousand years, that is, since we became anatomically, and thus we may presume cognitively, modern. For this reason, what interests me most about movies is the question: what is it that our experience of them replaced? We have only had them for a hundred and some odd years, not long enough for our brains to have evolved from some pre-cinematic condition into something that may be said to have an a priori grasp of what a movie is, in the same way that we now know that human brains come into the world with the concept of, for example, ‘animate being’. We are not naturally movie-viewing creatures, though it certainly feels natural, as though it were just what we’ve always done. What then is it that we’ve always done, of which movie-viewing is just the latest transformation? What is that more fundamental category of activity of which movie-viewing is a variety?

One well-known answer is that watching movies is an activity much like dreaming. This is evidenced by the numerous euphemisms we use for the motion picture industry. In his recent book, The Power of Movies: How Mind and Screen Interact, the analytic philosopher Colin McGinn explicitly maintains that the mind processes cinematic stories in a way that is similar to its processing of dreams. He even suggests that movies are ‘better’ than dreams to the extent that they are ‘dreams rendered into art’.

But what then are dreams? To begin with, dreams are a reminder that every story we come up with to account for who we are and how we got to be that way is utterly and laughably false. Everything I tell myself, every comforting phrase so useful in waking life, breaks down and becomes a lie. For eight hours a day, it is true that I have killed someone and feel infinite remorse, that my teeth have fallen out, that I am able to fly but ashamed to let anyone know, that the airplanes I am in make slow motion, 360-degree loops, that my hair is neck-length and won’t grow any longer. None of these things is true. Yet, some mornings, for a few seconds after awakening, I grasp that they are truer than true. And then they fade, and the ordinary sense of true and false settles back in.

The images that accompany these feelings –the feeling of shame at levitating, the feeling of being in a doomed airplane—are relatively unimportant. They are afterimages, congealed out of the feelings that make the dreams what they are. As Aristotle already understood, and explained in his short treatise On Dreams, “in every case an appearance presents itself, but what appears does not in every case seem real… [D]ifferent men are subject to illusions, each according to the different emotion present in him.” Perhaps because of this feature of dreams –that they are not about the things that are seen, but rather the things that are seen are accompaniments for feelings– dreams have always been interpreted symbolically. This has been the case whether the interpreter believes that dreams foretell the future, or in contrast that they help to make sense of how the past shaped the present. Psychoanalysis has brought us around, moreover, to the idea that retrodiction is no more simple a task than oneiromancy, and that indeed the two are not so different: once you unravel the deep truth of the distant past, still echoed in dreams even if our social identities have succeeded in masking it, then by that very insight, and by it alone, you become master of your own future.

It seems to me that we don’t have an adequate way of talking about dreams. The topic is all but taboo, and anyone who recounts his dreams to others, save for those who are most intimate, is seen as flighty and mystical. Of course, the consequence of this taboo is not that dreams are not discussed, but only that they are discussed imprecisely. For the most part, we are able to explain what happened, but not what the point-of-view of the dreamer was. This is overlooked, I suspect, because it is taken for granted that the point-of-view of the dreamer is that of a movie viewer. What people generally offer when prompted to recount a dream is a sort of plot summary: this happened, then this, then this. Naturally, the plot never makes any sense at all, and so the summary leaves one with the impression that what we are dealing with is a particularly strange film.

Certainly, there is a connection between some films –especially the ‘weird’ ones– and dreams, but only because the filmmakers have consciously, and in my view always unsuccessfully, set about capturing the feeling of a dream. From Un chien andalou to Eraserhead, weird things happen indeed, but the spectator remains a spectator, outside of the world projected onto the screen, looking into it. We are made to believe that our dreams are ‘like’ movies, but lacking plots, and then whenever an ‘experimental’ filmmaker attempts to go without plot, as if on cue audiences and critics announce that the film is like a dream. Middle-brow, post-literate fare such as Darren Aronofsky’s tedious self-indulgences have further reduced the dreamlike effect supposedly conveyed by non-linear cinema to an echo of that adolescent ‘whoah’ some of us remember feeling at the Pink Floyd laser-light show down at the planetarium.

Dreams are not weird movies, even if we recognize the conventions of dreamlikeness in weird movies. Weird movies, for one thing, are watched. The dreamer, in contrast, could not be more in the world dreamt. It is the dreamer’s world. It is not a show.

However problematic the term, cinematic ‘realism’ shows us, moreover, that movies can exhibit different degrees of dreamlikeness, and thus surely that there is something wrong with the generalized movie-dream analogy. In dream sequences, we see bright colors and mist, and, as was explicitly noted by a dwarf in Living in Oblivion, we often see dwarves. When the dream sequence is over, the freaks disappear, the lighting returns to normal, and in some early color films, most notably The Wizard of Oz, we return to black-and-white, the cinematic signifier of ‘reality’. My dreams are neither like the dream sequences in movies, nor are they like the movies that contain the dream sequences. Neither Kansas nor Oz, nor limited to dwarves in the repertoire of curious sights they offer up.

A much more promising approach is to hold, with Cavell, that movies are mythological, that their characters are types rather than individuals, and that the way we experience them is probably much more like the way folk experience their tales. Movies are more like bedtime stories than dreams: like what we cognize right before going to sleep than the mash that is made of our waking cognitions after we fall asleep.

If anything on the screen resembles dreams, it is cartoons (and thus Cavell is right to insist that these are in need of a very different sort of analysis than automated world projections). Cartoons are for the most part animistic. It is difficult to imagine a dream sequence in a Warner Brothers cartoon, since there were to begin with no regular laws of nature that might be reversed, there was no reality that might be suspended. For most of the early history of cartoons, there were no humans, but only ‘animate’ beings, such as cats and mice, as well as trees, the sun, and clouds, often given a perfunctory face just to clue us into their ontological status.

The increasing cartoonishness of movies –both the increasing reliance on computer graphics, as well as the decreasing interest in anything resembling human beings depicted in anything resembling human situations (see, e.g., Pierce Brosnan-era James Bond for a particularly extreme example of the collapse of the film/cartoon boundary)—may be cause for concern. Mythology, and its engagement with recognizably human concerns about life and death, is, it would seem, quickly being replaced by sequences of pleasing colors and amusing sounds.


I do not mean to come across as a fogey. Unlike Adorno with his jazz problem (which is inseparable from his California problem: the state that made him regret that the Enlightenment ever took place), I am a big fan of some of the animistic infantilism I have seen on digital screens recently. Shrek and the Teletubbies are fine entertainments. I am simply noting, already for a second time, that the era of movies is waning, and that nothing has stepped in, for the moment, to do what they once did.

A video-game designer recently told me that ‘gaming’ is just waiting for its own Cahiers du Cinéma, and that when these come along, and games are treated with adequate theoretical sophistication not by fans but by thinkers, then these will be in a position to move into the void left by film. I have no principled reasons to be saddened by this, but they will have to do a good deal more than I’ve seen them doing so far. Now I have not played a video game since the days when Atari jackets were sincerely, and not ironically, sought after. But I did see some Nintendo Wii consoles on display in a mall in California when I was home for the holidays this past week. The best argument for what the crowding mall urchins were doing with those machines is the same one, and the only one, that we have been able to come up with since Pong, and the one I certainly deployed when pleading with my own parents for another few minutes in front of the screen: it seems to do something for developing motor skills. This makes video games the descendants of sporting and hunting, while what movies moved in to replace were the narrative folk arts, such as the preliterate recitations that would later be recorded as Homer’s Odyssey. These are two very different pedigrees indeed, and it seems unlikely to me that the one might ever be the successor to the other.

Dreams are the processing of emotional experiences had in life, experiences of such things as hunting, or fighting, or love. Narrative arts, such as movies, are the communal processing, during waking life, of these same experiences. Movies are not like dreams, and video games are not like movies. And as for what experiences are, and why all the authentic ones seem to have already been had by the time we arrive at an age that enables us to reflect on them (seem all to have happened in California), I will leave that question to a better philosopher, and a less nostalgic one.

**

For an extensive archive of Justin Smith’s writing, please visit his archive at www.jehsmith.com.

Monday, December 25, 2006

Happy Newton’s Day!


Two years ago we at 3QD as well as Richard Dawkins independently decided to celebrate December 25th as Newton’s Day (it is Sir Isaac’s birthday). You can see my post from last year here. So here we are again. This year I will just provide two interesting things related to Newton, who some argue was the greatest mind of all time. For example, did you know that he hung out in bars and pubs in disguise, hoping to catch criminals? He did. Read this, from wikipedia:

As warden of the royal mint, Newton estimated that 20% of the coins taken in during The Great Recoinage were counterfeit. Counterfeiting was treason, punishable by death by drawing and quartering. Despite this, convictions of the most flagrant criminals could be extremely difficult to achieve; however, Newton proved to be equal to the task.

He gathered much of that evidence himself, disguised, while he hung out at bars and taverns. For all the barriers placed to prosecution, and separating the branches of government, English law still had ancient and formidable customs of authority. Newton was made a justice of the peace and between June 1698 and Christmas 1699 conducted some 200 cross-examinations of witnesses, informers and suspects. Newton won his convictions and in February 1699, he had ten prisoners waiting to be executed. He later ordered all records of his interrogations to be destroyed.

Newton’s greatest triumph as the king’s attorney was against William Chaloner. One of Chaloner’s schemes was to set up phony conspiracies of Catholics and then turn in the hapless conspirators whom he entrapped. Chaloner made himself rich enough to posture as a gentleman. Petitioning Parliament, Chaloner accused the Mint of providing tools to counterfeiters (a charge also made by others). He proposed that he be allowed to inspect the Mint’s processes in order to improve them. He petitioned Parliament to adopt his plans for a coinage that could not be counterfeited, while at the same time striking false coins. Newton was outraged, and went about the work to uncover anything about Chaloner. During his studies, he found that Chaloner was engaged in counterfeiting. He immediately put Chaloner on trial, but Mr Chaloner had friends in high places, and to Newton’s horror, Chaloner walked free. Newton put him on trial a second time with conclusive evidence. Chaloner was convicted of high treason and hanged, drawn and quartered on March 23, 1699 at Tyburn gallows.

More from Wikipedia here. And if you are in the mood for something much more substantive, I highly recommend watching this video of my mentor and friend, Professor Akeel Bilgrami, delivering the University Lecture at Columbia earlier this fall, entitled “Gandhi, Newton, and the Enlightenment.” I admit that the subject is only weakly related to Newton, but it is well worth watching on Newton’s Day nevertheless. The following description is excerpted from a Columbia University website:

Bilgrami devoted much of his talk to tracing the origins of “thick” rationality as well as the critiques it has received over the years. He identified the 17th century as the critical turning point, when scientific theorists such as Isaac Newton and Robert Boyle put forward the idea of matter and nature as “brute and inert”—as opposed to a classical notion of nature as “shot through with an inner source of dynamism, which is itself divine.”

Even at the time, there were many dissenters who accepted all the laws of Newtonian science but protested its underlying metaphysics, Bilgrami explained. They were anxious about the political alliances being formed between the commercial and mercantile interests and the metaphysical ideologues of the new science—anxieties echoed by the “radical enlightenment” as well as later by Gandhi.

According to Bilgrami, both Gandhi as well as these earlier thinkers argued that in abandoning our ancient, “spiritually flourishing” sense of nature, we also let go of the moral psychology that governs human beings’ engagement with the natural, “including the relations and engagement among ourselves as its inhabitants.”

Bilgrami expressed a certain sympathy for this dissenting view, noting that even if we moderns cannot accept the sacralized vision favored by these earlier thinkers, we should still seek alternative secular forms of enchantment in which the world is “suffused with value,” even if there is no divine source for this value. Such “an evaluatively enchanted world” would be susceptible not just to scientific study, Bilgrami argued, but would also demand an ethical engagement from us all.

See the video here.

And Merry Christmas!!!

Monday, December 18, 2006

Don’t Curb Your Enthusiasm

Australian poet and author Peter Nicholson writes 3QD‘s Poetry and Culture column (see other columns here). There is an introduction to his work at peternicholson.com.au and at the NLA.

One of my favourite television shows in recent times has been Curb Your Enthusiasm. Larry David, Executive Producer of Seinfeld, plays ‘Larry David’ in a largely Los Angeles milieu. Life seems to be either a series of excruciating personal humiliations or monumental social faux pas. The humour here is by turns uproarious, occasionally wistful and often very, very rude. I recommend it to anyone who wants to clear away the blues. Larry’s long-suffering ‘wife’ Cheryl has to put up with Larry as he tries to get along in a world that is always at a tangent to where Larry wants it to be. The lesson seems to be: curb your enthusiasm. Venture outside the expected and you will be unmercifully crushed by status quo expectations.

Which is just what you must not do in art if you want your work to have any chance of making it past the present moment. Don’t curb your enthusiasm. That is the main lesson. Your enthusiasm may be somewhat forbidding—Ibsen, unfashionable—Rachmaninov, or a variety of volupté—take your pick. The essential thing is the passion you bring to bear on your work, which naturally has its own tides of compulsion and lassitude.

Speaking of Rachmaninov, there was an outstanding concert given here in Sydney recently when Vladimir Ashkenazy took the Sydney Symphony Orchestra through an all-Rachmaninov program of the Three Russian Songs, the Piano Concerto No 1 (Alexsey Yemtsov) and The Bells (Cantillation, Steve Davislim, Merlyn Quaife, Jonathan Summers). Poor Rachmaninov, who had so much bad press dumped on him in his lifetime and who had to put up with continual sniping by 12-tone monomaniacs. But it has ended up being Rachmaninov who has triumphed. His music is heard and enjoyed across the planet for the reason that it is in touch with the human on a deep level. It does not deny our humanity. Here in Sydney Rachmaninov’s music surged through the Concert Hall with a grandeur and spirit that was electrifying. This effect did not appear out of the blue, but came through rehearsal, the careful harnessing of resources and, no doubt, long hours of practice by choir and soloists. Yemtsov, the pianist, had enthusiasm in spades. He didn’t behave as if he was being crucified at the piano as he performed, in the manner of some virtuosi. The music came first and last.

A few weeks earlier the Wiener Philharmoniker under the direction of Valery Gergiev performed in Australia for the first time. In advance, the programming didn’t look all that interesting. Tchaikowsky 5. Brahms 4. But how wrong could one be. The Brahms was a performance of a kind where you felt you were being forced to look at a terrifying piece of unearthed Greek statuary. What could account for this intensity? Perhaps the Beslan massacre was uppermost in Gergiev’s mind as he conducted, or maybe it was the orchestra’s close association with the composer—the Fourth Symphony played by the Vienna Philharmonic was the last concert music Brahms heard. At any rate, enthusiasm was the key. The players love making music together, and it shows. I guess that follows for Nine Inch Nails or U2 as well.

Enthusiasm that tears a passion to tatters is no use at all. You may feel something strongly, but that won’t get you through in art where you must apply technical skills, and subtlety, to the finished product. One skill which seems in short supply these days is the ability to see, on the whole, Mozart and Picasso notwithstanding, that less is more. Poetry especially seems to be experiencing the equivalent of bulimia as books pour forth. Just who is going to be reading all this stuff in the future? Very few people I should think, though I’d be happy to be proved wrong. For writers, enthusiasm means quiet persistence, letting the praise or blame fly by, going from A to B without getting diverted by the passing parade. And I think it means putting greatness of spirit in your way—it should be sitting on your shoulder.   

Caspar David Friedrich had enthusiasm, even as his work fell from popularity. Need anyone still point out the profound example of Vincent van Gogh? Cole Porter with his crushed legs but indomitable spirit had it. You feel it right through Gershwin, though a brain tumour killed the composer at far too young an age. There is so much creative beauty in the world and it is all filled with a kind of joyfulness at the fact of existence. It is there in philosophical enquiry and mathematical modelling. Surely Nietzsche had it, along with his migraines and bad digestion. And when the clerk in Berne came up with the Special Theory of Relativity, there too was a superabundance of the fröhliche Wissenschaft.

Well, you may end up in art having to do the equivalent of Larry David at the end of the second season of CYE when he is made, after another disastrous imbroglio, by court order, to carry a scarlet letter placard saying I STEAL FORKS FROM RESTAURANTS in front of The W Hotel as his erstwhile employees in the television industry, on their way to a network symposium, frostily avoid him. And Larry is probably thinking, along with Mahler—my time will come. However, whether it comes or not, in culture there can be no trade-offs with those who (don’t) know. That is clear. 

The worms will come out of the woodwork. People will be unkind, to put it mildly. Your work will be ignored or misrepresented. All that is to be expected. At all events, the lesson must go home. In art, in life, don’t curb your enthusiasm.

                                                                         *

               ICI REPOSE
        VINCENT van GOGH

Not here the slippage
Of motive, the bull market,
Dressage of cocktail and auction;
Neither the victory lap nor prize.
And yet, pushed out, vertiginous paint,
Cypress and flower spinning,
Nature’s cusp stubbed on canvas,
A bandaged head staring with love,
And that alone, at each malignant defeat.

Ours is a tepid dreaming
With not even the courage of beauty.
We wish our Age of Noise
To be an almanac footnoted,
Its mug celebrities
Caught in silverfish pages,
But still we won’t avoid
An empty room dimming our glamour.

Theories puffed, the boast
Of a thousand critical niceties,
Are shed in the fierce night,
One name cast
Near sulphurous soil,
Whose paintings keep,
For we who believe
Not in greatness, nor the strength of art,
In the space reserved for grace,
The sharktooth eye
Of a winnowing field
And yellow starlight shining.

Written 1989 Published 1997

 

Waiting for Tet

Michael Goldfarb, reviewing Rajiv Chandrasekaran’s new book Imperial Life in the Emerald City in the December 17 New York Times, remarks: “Regardless of how the war ends, Iraq is not Vietnam.”

Wanna bet? Engaged once more in a fantastic imperial over-reach, we are retracing the steps that led to the final defeat and withdrawal from Vietnam. Let us hope it is sooner rather than later, and that no new Richard Nixon emerges to slow it down.

Once More Vietnam
Consider the parallels and what they tell us about American imperialism. The Vietnamese Tet offensive two months shy of 39 years ago destroyed the illusion of a possible American victory in Vietnam. President Johnson, realizing that the US was losing the war, sacked Secretary of Defense Robert McNamara and charged his successor, Clark Clifford, with making an “A to Z” assessment of the US war effort. All the wise men of the time were convened, ranging from famous retired generals like Omar Bradley, Maxwell Taylor and Matthew Ridgway, future Secretary of State Cyrus Vance, Under-Secretary of State and future Wall Street financier George Ball, McGeorge Bundy, then head of the Ford Foundation, powerful Wall Streeter and former Treasury Secretary C. Douglas Dillon, and finally to Dean Acheson, Truman’s Secretary of State, principal architect of the Cold War, and the wisest of wise men. Perhaps only the figure of a President of Harvard was missing from the august cast.

Their recommendations changed the course of American involvement in the Vietnam War. American Commander William Westmoreland’s request for 250,000 more troops was rejected, and bombing North Vietnam for peace was declared a failure. The group decided that a military victory was unattainable, and that de-escalation and a negotiated peace were the only viable options.

Professor Richard Hunt sums up in Vietnam and America (edited by Marvin Gettleman, et al., 1995) how the Tet Offensive by the North Vietnamese in January 1968 set the policy shift in motion:
“This demonstration of the vulnerability of U.S. leadership was not lost on many sectors of the ruling class who now began to argue openly that the government had made a mistake and that policy in Vietnam and elsewhere had to be rebuilt around a recognition of the limitations of U.S. power. Never again would any administration be able to unite the entire ruling class behind a strategy of U.S. aggressive military victory in Vietnam.”

And Now Iraq
Once more the Secretary of Defense has been sacked, and the wise men have spoken. “The situation in Iraq is grave and deteriorating,” the Iraq Study Group reports. The new Secretary of Defense Gates admits “we are not winning.” Colin Powell says “we are losing,” though “we have not lost.” The Army is broken, he believes. The Army will break, says Army Chief of Staff Peter Schoomaker, without additional troops to meet current combat levels, never mind additional troops to be used in what the White House is calling “a surge,” newspeak for escalation.

Negotiation, “Iraqization” of the ground war, and pressure on the Iraqi government to take up the slack: just the approach recommended by the Vietnam wise men almost 39 years ago. The new wise men, as James Baker and Lee Hamilton make clear in their letter of transmittal of the Iraq Study Group Report, back a “bipartisan approach” to retrieve “the unity of the American people in a time of political polarization,” so that the country can develop “a broad, sustained consensus.”

The wise men, stalwarts of the American ruling class, seek to salvage the American empire in the Middle East and create true believers once more of the American people. They offer a skimpy fig leaf to Brother Bush, though with a plan that even Michael Gordon, the New York Times military aficionado, noted was a re-hash of several already shelved proposals for winning the war in Iraq, with his front page offered perhaps in penance by his publisher. So endangered is American hegemony in the Middle East, however, that the wise men put Israel on notice that territorial withdrawal and a two-state solution must be part of the plan for peace and American success in the region. No wonder the Israeli Prime Minister let slip that he had nuclear weapons. He and his gang must be a bit scared right now.

But Bush resists. Despite the public protests of many a retired general, something not seen since Douglas MacArthur’s insubordination forced Truman to fire him, Bush seems still in control of Iraq policy. He and his cabal, reports Robert Dreyfuss in the December 18 issue of the Nation, recently considered supporting a coup against Iraqi Prime Minister Nuri Kamal al-Maliki. Better a general, a strong man, to set things straight, they reasoned. But, alas, no Diem he. Apparently, al-Maliki had heard the news too. After initially snubbing Bush, he showed up in Amman on November 29 for his half-hearted anointing by Bush as “the right guy for Iraq.”

So Bush still has his war, but he has not yet had his Tet offensive. The wise men strive in vain: there can be no consensus on their terms or Bush’s for the resolution of the Iraq War. There is no basis for their consensus, as there is no basis for their success. Just like Ho Chi Minh thirty-nine years ago, no one now is going to give the United States an out.

Would you? After a war that has killed an estimated half a million Iraqis, that has triggered a civil war that has set Sunnis and Shiites upon each other?

We must grimly await anew a Tet. Only then will Americans say to their rulers and their ruling class that enough is enough. Or perhaps they will say it to us.

Random Walks: Primal Instincts

Everyone seems to be abuzz these days about Apocalypto, the latest directorial effort by Mel Gibson. Gibson has suffered a bit of a beating in the press of late for his drunken anti-Semitic rants, but never let it be said that the man can’t tell a good story. Apocalypto is Braveheart with a Mayan twist, and just as much gratuitous blood and gore as The Passion of the Christ.

The hero is a young man named Jaguar Paw, whose village is attacked by a Maya war party. The captured villagers are herded back to the Maya city, where the women are sold as slaves and the men are painted blue and sacrificed atop a stone pyramid. Jaguar Paw is spared and escapes, and the rest of the film follows his journey through the rainforest — former Maya captors in hot pursuit — to be reunited with his wife and son.

Much has been made of the “factual inaccuracies,” historical anachronisms, and other liberties taken with the specifics of Mayan culture. For instance, many of the details of the human sacrifice apparently were taken from Aztec rituals (e.g., the blue paint, the cutting out of the heart, and the decapitation). The Maya didn’t use metal javelin blades; they used obsidian (volcanic glass) for their cutting tools and weapons, and were only just beginning to experiment with metalwork when the Spanish conquistadors arrived in the early 16th century. And the use of ant mandibles to suture wounds is also understood to have been an Aztec practice. So Gibson and his team essentially conflated various aspects of Mesoamerican culture.

Personally, I don’t have a big problem with directors taking a few liberties when creating an obviously fictional feature film. Most of us can tell the difference between that and, say, a documentary. Nonetheless, it’s good that archaeologists and historians are speaking up about some of the misrepresentations, because it helps increase awareness and broaden the general public’s knowledge of a truly magnificent ancient culture. The Maya were about a lot more than human sacrifice and stunning architectural ruins.

For starters, the Maya independently developed the concept of zero by 357 AD — long before the Europeans, who didn’t figure it out until the 12th century. They were also quite advanced in the realm of astronomy, despite being limited to observing the heavens with the naked eye. The most obvious error in Apocalypto is when Jaguar Paw is spared being sacrificed by a timely solar eclipse, which supposedly awed the Maya priests into freeing the remaining captives. Admittedly, the film’s eclipse occurs just before a full moon, when in reality a solar eclipse can only happen at a new moon, roughly 15 days away; I’m willing to grant Gibson some artistic license on that front. The real problem is that the Maya would have known all about the solar eclipse, and would hardly have found it awe-inspiring. Their calendar was sufficiently accurate to enable them to predict both solar and lunar eclipses far into the future, and their codices have survived as evidence of their expertise.
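The “roughly 15 days” figure is simple lunar bookkeeping, and a two-line sketch makes it concrete. The synodic month value is a standard astronomical constant; nothing here is specific to the Maya calendar:

```python
# A solar eclipse requires the Moon between Earth and Sun, i.e. a new moon.
# The synodic month (new moon to new moon) averages 29.530588 days, so the
# nearest full moon is half that interval away -- the film's "eclipse just
# before a full moon" is astronomically impossible.
SYNODIC_MONTH_DAYS = 29.530588

days_new_to_full = SYNODIC_MONTH_DAYS / 2
print(round(days_new_to_full, 2))  # 14.77 days, i.e. roughly 15
```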

But it’s their architectural feats that people find most awe-inspiring, especially the giant, stepped pyramids, which Wikipedia informs me date back to the “Terminal Pre-classic period and beyond.” They’re not just visually stunning; there is growing evidence that many Maya structures also provide a sort of “Stone Age” sound track via unusual acoustical effects. Thanks to a rapidly emerging interdisciplinary field known as acoustical archaeology, more and more people who study various aspects of Maya culture are beginning to suspect that at least some of those sound effects were the result of deliberate design.

Among the strongest proponents of this hypothesis is David Lubman, an acoustical consultant based in Orange County, California, who has been visiting the sites of Mayan ruins for years, recording sound effects, and taking them back home for extensive scientific analysis. Back in 1999, I wrote about his work with the great pyramid at Chichen Itza, part of the Mayan Temple of Kukulkan, for Salon. The pyramid is famous, first, for a visually stunning, serpentine “shadow effect” that occurs during the spring and fall equinox; according to some Maya scholars, the temple seems to have been deliberately designed to align astronomically to achieve that spectacular effect.

The second effect is an acoustic one: clap your hands at the bottom of one of the massive staircases, and it will produce a piercing echo that Lubman, for one, thinks resembles the call of the quetzal, a brightly colored exotic bird native to the region. He considers it the world’s first and oldest sound recording, making the Maya the earliest known inventors of the soundscape. Similar effects have been noted at the Maya pyramid at Tikal in Guatemala, and at the Pyramid of the Magician in Uxmal, Mexico. In the past, such effects were ascribed to design defects, but Lubman thinks they may have been deliberate — implying that, far from being savage primitives, the Maya had a grasp of engineering and acoustical principles that rivaled their astronomical accomplishments.
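The acoustics behind a staircase echo like this can be sketched in a few lines: each stair tread returns a reflection slightly later than the one below it, and the ear hears that regular pulse train as a tone. This is textbook periodic-reflection physics, not Lubman’s own analysis, and the tread depth below is an illustrative assumption, not a measurement from Chichen Itza:

```python
# Sketch: a handclap reflecting off a flight of stairs returns one small
# echo per tread. If adjacent treads lengthen the round-trip path by ~2*d,
# reflections arrive every 2*d/c seconds, which the ear perceives as a
# tone of frequency c/(2*d) -- plausibly a bird-call-like squawk.
# TREAD_DEPTH is an assumed value for illustration only.
C_SOUND = 343.0      # m/s, speed of sound in air at ~20 C
TREAD_DEPTH = 0.26   # m, assumed depth of one stair tread

delay_per_tread = 2 * TREAD_DEPTH / C_SOUND   # seconds between echo pulses
perceived_freq = 1 / delay_per_tread          # Hz
print(f"{perceived_freq:.0f} Hz")             # a few hundred Hz
```

In practice the pitch also sweeps (a chirp), because the effective spacing between reflection points changes with the angle from listener to tread; the constant-spacing figure above is just the simplest version of the idea.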

It’s been seven years since I wrote that article, and Lubman hasn’t been idle. He’s turned his attention to the Great Ball Court at Chichen Itza, a huge field measuring 545 feet long and 225 feet wide. It’s essentially a Stone Age sports arena, where a part-sport, part-ritualistic ball game (common to ancient Mesoamerican cultures) was played, known in Spanish as juego de pelota. It was a literal bloodsport, known as the sport of life and death. It was extremely violent, requiring players to wear heavy padding. Even so, they often suffered serious injuries, and occasionally players died on the field. There is evidence that, in the Aztec version, the losers would be sacrificed to the gods, and their skulls used as the core of a newly made rubber ball for the next game. This was considered to be a great honor, so they might have considered it “winning.” Guides at Chichen Itza insist that it was the winning team members who were sacrificed. (Personally, I can’t think of a better reason for throwing a game.)

The Great Ball Court has another interesting feature: it’s a sort of “whispering gallery,” in which a low-volume conversation at one end can be clearly heard at the other. Similar “whispering gallery” effects can be found in many European domed cathedrals — most notably St. Paul’s in London — but it’s the curved domes that create the amplification effect as sound waves bounce off the surfaces. The Great Ball Court has no vaulted ceiling, and even today, the source of its amplification is incompletely understood, although theories abound. Lubman believes that the parallel stone walls are constructed in such a way that they serve as a built-in waveguide to more efficiently “beam” sound waves into the temples at either end.

There is also a bizarre flutter echo, lasting a few seconds, that can be heard between two parallel walls of the playing field; you can listen to a sound sample here. This acoustical effect is the subject of Lubman’s most recent work, which he presented earlier this month at a meeting of the Acoustical Society of America in Hawaii. Invariably, in Western architecture, such flutter echoes arise from design defects, so for decades this effect at the Great Ball Court has been disregarded by archaeologists. Ever the maverick, Lubman believes that in the case of the Great Ball Court, such an echo might have been a deliberate design.
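The physics of a flutter echo is easy to sketch: sound ping-pongs between two parallel reflective surfaces, repeating once per round trip until absorption kills it. A minimal sketch, with the wall separation chosen as an illustrative assumption rather than a site measurement:

```python
# Sketch: a flutter echo is sound bouncing repeatedly between two parallel
# hard walls. Each round trip takes 2*L/c seconds, so a single impulsive
# sound (a clap, or a ball strike) is heard as a rapid train of repeats
# at a rate of c/(2*L) per second, dying out as the walls absorb energy.
# WALL_SEPARATION is an assumed value for illustration only.
C_SOUND = 343.0         # m/s, speed of sound in air
WALL_SEPARATION = 36.0  # m, assumed distance between the parallel walls

round_trip = 2 * WALL_SEPARATION / C_SOUND   # seconds per echo repeat
repeats_per_second = 1 / round_trip
print(f"one repeat every {round_trip * 1000:.0f} ms "
      f"({repeats_per_second:.1f} per second)")
```

A repeat rate of a few per second, audible for a few seconds before dying away, is consistent with the rattle-like character described in the text.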

The flutter echo would have been heard every time a ball hit the wall of the playing field — and possibly even when it hit the hard surfaces of the protective gear worn by the players. There is an eerie resemblance to the sound of a rattlesnake about to strike, and many of the carvings in the stone surfaces at Chichen Itza feature rattlesnakes. Some modern Maya interpret the flutter echoes as the voices of their ancestors, according to Lubman.

In fact, weird sound effects seem to be par for the course at the sites of Maya ruins. Chichen Itza also has “musical phalluses”: a set of stones that produce melodic tones when tapped with a wooden mallet. And at Tulum on the Yucatan coast, guides have reported clear whistles when the wind direction and velocity are just right, which Lubman believes could have been a possible signal to warn of developing storms.

There are plenty of scholars who remain skeptical of Lubman’s theories, and intent is well-nigh impossible to prove conclusively in the absence of express written historical documentation. Even Lubman admits his “evidence” for intentional design is a bit circumstantial.

His work is fascinating, nonetheless, and really — why couldn’t a society as advanced in math and astronomy and architecture as the Maya also have figured out how to create strange acoustical effects with their structures? We think of them as Stone Age primitives, and violence was undoubtedly a huge part of their culture. I certainly wouldn’t advocate a return to those traditions, but Lubman’s work offers a window into this lost culture that indicates the Maya were far more sophisticated and complex than the brute savages depicted in Gibson’s otherwise-entertaining film. Perhaps there’s an element of wishful thinking there, but unlike Apocalypto, there’s some solid scholarship behind Lubman’s theories. It isn’t outright fiction.

When not taking random walks at 3 Quarks Daily, Jennifer Ouellette writes about science and culture at her own blog, Cocktail Party Physics. Her latest book, The Physics of the Buffyverse, has just been published by Penguin.

Monday, December 11, 2006

monday musing: hurricane


Hurricanes are such powerful forces that we often anthropomorphize them; we think of them as conscious beings. One sign of this is that we name them. We talk about where they ‘want’ to go and what their ‘intentions’ are. And perhaps nothing is more mysterious, tantalizing and intriguing than the ‘eye’ of the hurricane. If the hurricane were a conscious being, the seat of its consciousness would surely be within the calm center of the eye. Indeed, there is a long history of equating the ‘eye’ with the ‘I’. The eye is the thing through which you perceive in the act of looking, though you never see the eye itself as you do so. The ‘I’ is the unifying force through which experiences are held together as ‘my’ experiences, though you never get to experience the ‘I’ itself as you do so.

But, in fact, hurricanes are the very opposite of intentional beings. A hurricane is simply the outcome of various inputs. The wind is blowing at such and such velocity. The temperature of the ocean water is at such and such degrees. The atmospheric conditions are having this or that effect. Ultimately, like any other force of nature, hurricanes are absolutely indifferent to how they develop, where they go, and what effects they have. They play themselves out like an algorithm. Any given hurricane has more in common with a storm blowing across the heat-blasted, empty and forlorn wastelands of Mercury than it does with any creature picking its way across a landscape fraught with opportunities for the making of decisions and the exercise of intentional actions. Hurricanes do not care; they simply are.

When a hurricane comes into close contact with a city full of human beings there occurs a confrontation between a world of meaning and intentionality on one side and the mute indifference of the laws of nature on the other. The hurricane makes its impact felt physically, in swaths of devastation that reduce the city back to its material elements, back to mere things devoid of context and framework. The hurricane treats the city like an aggregate of stuff, and in doing so, reveals the fact that, on one level, that is all a city ever really is, no matter how much that stuff may actually mean to the individuals who live with it.


The photographs of New Orleans in the aftermath of Hurricane Katrina by Robert Polidori in the special exhibit at the Metropolitan Museum of Art are studies in the results of this impact between the indifference of nature and the intentional space of the city. They are incredibly powerful photographs. They show a city reduced to mere things. Perhaps most profoundly, they show the interior spaces of people’s homes as those homes have been instantly transformed into ruins. Bedrooms, living rooms and kitchens that less than two years ago were rich with the contents of human lives look like they are the remnants of a long dead civilization. They look a thousand years old. The effect is not unlike the work of the artist Gordon Matta-Clark, who would, literally, cut through the urban landscape exposing the interiors of houses and other structures and creating what felt like open wounds within the space of the city.

The amazing thing about Matta-Clark’s work was the way that it instantly transformed the most intimate spaces into places that feel like ruins, archeological. In its swath of destruction, Hurricane Katrina achieved something similar within the urban landscape of New Orleans.

In my favorite photograph, a white automobile sits in front of a white house. At first glance, it isn’t immediately clear that anything is wrong. But further study shows that the entire area has been under water. Lines of sediment have formed on the exterior of the house showing the different levels of flooding over previous weeks. Those same lines are mirrored on the car, revealing, from the perspective of the photograph, layers of geological strata that mark the progress of the flooding. The overall effect is to erase the significance of the particular objects in the photograph. The house and the car aren’t really what they are anymore. They have become elements in a more primal geological story that is about water and wind and dirt and mud.

In aggregate, these photographs tell a general story about the transience of human things in the face of cosmic indifference. And oddly, in doing so, they are profoundly beautiful. They are so beautiful that it is disconcerting. In viewing the photos, I began to find myself almost pleased that the hurricane had graced us with these images of human ruin. In one photo, several cars have been upended in the flooding and now lean at angles against a few houses on a block. It is as if they were placed there by Richard Serra. And they are beautiful that way. It is inherently pleasing to look at and to contemplate.

Perhaps this is the revenge of the mind against the meaninglessness of the hurricane’s work. The hurricane won. In the course of a few hours it reduced generations of human activity to so much detritus floating in the filthy waters that breached the levees. But in doing so it also revealed a truth, which is that the richness of intentional spaces always contains the seeds of collapse and decay. It is melancholy to reflect that every facet of the urban landscape is also a ruin in potential. But it is no less true for being so. Hurricane Katrina neither knew nor cared that it was beautiful. But Polidori’s photographs have revealed the truth content that, despite itself, the hurricane carried along in its wake. What can be glimpsed in those striking images is the beauty of eking out transient spaces of meaning within the background of the swirl, the decay, within the waiting arms of death and oblivion.

Teaser Appetizer: Not so Nobel

You have been selected for the jury that awards the Nobel Prize (NP) in medicine. One of the contenders for the prize is multiple antiviral drug therapy for HIV-AIDS. Surely, you say, this therapy has prolonged the lives of millions of HIV patients who were otherwise doomed, which makes it a favorite in your mind. But then you consider that it does not guarantee a cure; while it is a great innovation, it is not a fundamental discovery. How do you decide? Let us look at the history of Nobel Prizes in physiology or medicine.

The Nobel Prize (NP) has propelled brilliant scholars into the stratosphere of fame; many scientists have flown high and long on the wings of a seminal discovery but a few have glided back to ground in a short time. Only rarely has there been an unceremonious crash.

One hundred and five years ago, on 10 Dec 1901, Emil Behring [photo on right] won the first Nobel Prize in medicine for his work on the serum therapy of diphtheria. Nobel has since then honored one hundred and eighty-five more scientists in physiology or medicine. The annual continuity of the Nobel Prize (NP) suffered interruption during the world wars, so the prize has been awarded only ninety-eight times.

Not all Nobels are equal. That arbitrary quintessential American measure of everything – small, medium and large – could well describe the durability and the impact of the Nobel discoveries. Durability signifies how long the discovery remains valid before its improved replacement arrives, and impact shows the breadth of humanity that it benefits.

Ninety-one of the ninety-eight prize-winning discoveries qualify as “large impact”: they have opened gates to new vistas and have changed our lives forever, without our being much aware of it. The list is impressive and includes normal biological functions, pathogenesis of disease, tools of investigation and therapeutics.

The honor of the “Triple-extra-large impact” and arguably the largest impact discovery belongs to deciphering the very code of life that lay curled up — smug and self-assured — for over 3.5 billion years. For unraveling the twists of DNA, Francis Crick, James Watson and Maurice Wilkins received the NP in 1962.

Some “large” discoveries have helped us:

  • Quantify molecules, hitherto immeasurable (radio-immune assay: Yalow, 1977)
  • Pierce the crevices of the body without a knife (CAT: Cormack and Hounsfield 1979; MRI: Lauterbur and Mansfield 2003)
  • Indict the culprits (Tuberculosis: Koch 1905; Prions: Prusiner 1997)
  • Understand mundane functions (Olfactory system: Axel and Buck 2004; dioptrics of the eye: Gullstrand 1911)

The large Nobel has a long life. The very second NP was awarded to Ross in 1902 for the discovery of the pathogenesis of malaria and the lifecycle of the malarial parasite. His work is still valid more than a century later.

But there also have been discoveries with a shorter life span. Five such “medium impact” discoveries have provided extraordinary windows of opportunity. They may not have been durable, but they ushered in subsequent important discoveries:

  • 1903: Niels Finsen treated tuberculosis of the skin with concentrated sunlight and founded the Finsen Institute of Photo-therapy in Copenhagen in 1896. Antibiotics have replaced sunlight but this notion perhaps continues remotely with radiation treatment of cancer.
  • 1926: Johannes Fibiger induced the first experimental cancer in rat stomachs (Spiroptera carcinoma) by feeding them cockroaches infected with a worm called Gongylonema neoplasticum. Subsequently, coal tar application produced skin cancer in other animal experiments. His mentors Koch and Behring also won Nobels for other discoveries.
  • 1934: George Whipple, George Minot and William Murphy got the NP for their discoveries in treating pernicious anemia with liver extracts. Currently we treat pernicious anemia with vitamin B12.
  • 1939: Gerhard Domagk proved the antibacterial effects of prontosil rubrum (red dye – a derivative of sulfanilamide), which paved the way for the development of sulfonamide drugs. He proved the efficacy of prontosil in mice and rabbits infected with staphylococci and streptococci. It so happened that his daughter fell deathly sick with a streptococcal infection and he administered one dose of prontosil with skepticism — and in desperation. She recovered completely. Later he conducted wider successful human trials.
  • 1948: Paul Muller discovered the efficacy of DDT as a poison against arthropods. Three decades ago DDT was the main weapon in many countries for the control of the mosquitoes that carry malaria, but it fell into disrepute when suspicions mounted about its toxic effects on humans and wildlife. Recently, however, on 15 Sept 06, the WHO unambiguously rehabilitated this insecticide by recommending indoor spraying of the walls and roofs of houses to kill malaria-laden mosquitoes. Data has confirmed its safety in both humans and animals.

The ‘Oscar’ for the story, however, goes to two “Small impact” NP discoveries that have been peepshows of transient excitement and probably did more harm than good. These two Nobel discoveries stand out as not so noble. A fortuitous meeting of three scientists in a neurology conference in London set the stage for the first tragic discovery. The scientists were Fulton, Moniz and Freeman.

Fulton, like other scientists before him, had demonstrated that frontal lobotomy calmed chimpanzees. He shared this observation with Moniz, a Portuguese doctor, who mulled over the experimental idea and argued that cutting the nerve fibers between the frontal cortex and the thalamus (frontal leucotomy) could benefit psychotic patients with incurable hallucinations and obsessive-compulsive ideas. He would insert an ice-pick-like instrument on each side of the brain and, with a few sweeps, damage part of the frontal cortex. Some patients became docile but many deteriorated.

In 1936 Freeman and his coworker Watts refined the lobotomy procedure and named it the “Freeman-Watts Standard Procedure.” The pair demonstrated the procedure across the USA and made it extremely popular. Thus started the lobotomy craze. But further serious observation revealed that lobotomy harmed two thirds of patients and barely benefited the rest. What Fulton had investigated in animals, Freeman popularized in humans. But it was Antonio Egas Moniz who received the honor of the NP in 1949.

Unfortunately, he also received a bullet in his back from one of his not-so-happy patients a few years later, which left him paraplegic for life. His physical immobility ironically mirrored the emotional paralysis of some frontal lobotomy patients.

If this “small impact” NP was a consequence of a chance meeting of scientists the second “small impact” NP discovery resulted from serendipity.

When Wagner-Jauregg, like other investigators before him, observed that some patients with neurosyphilitic paresis improved after a febrile illness like typhoid or erysipelas, he set out to induce experimental febrile illness in his patients with a series of toxins. In 1888 he infected several patients with injections of streptococci. Stung by criticism, in 1890 he switched to non-infectious tuberculin; then in 1902 he used sodium nucleinate, boiled milk and milk protein – all in an attempt to induce fever. A few years later he observed that a soldier suffering from neurosyphilis had improved with concomitant malaria. So in 1917 he started infecting syphilitic paretic patients with malaria.

War had probably blunted his sensitivity, and he juxtaposed his treatment against the insane cruelty of war. He observed, “We were already in the third year of the war, and its emotional implications became more manifest from day to day. Against such a background, a therapeutic experiment could stir me little, in particular since its success could not be foreseen. What meant a few paralytics, who would possibly be saved, in comparison to the thousands of able-bodied and capable men who often died on a single day as a result of the prolongation of the war.”

The success of the treatment silenced its critics except one member of the prize committee: Dr. Gadelius, a Swedish psychiatrist, objected to awarding the Nobel Prize because he thought a physician who injected malaria into a patient with advanced syphilis was a ‘criminal.’ Notwithstanding this dissent, Julius Wagner-Jauregg received the NP in 1927 for demonstrating the therapeutic benefits of malaria in syphilitic dementia and paralysis. Many hailed this as a “therapeutic noble deed” for a hopeless condition.

The story of the small impact NPs exemplifies the pitfalls of any discovery. All Nobel Laureates shine brightly in the limelight, yet on some the lights dim before their fifteen minutes of fame expire. Some migrate into the cache of history and others disappear into the recycle bin, but none gets deleted.

Does this brief background help you decide whether the antiviral cocktail therapy for HIV-AIDS deserves the prize? Well, you should also know that some NP-winning therapeutic interventions belong to the “medium or small impact” categories. You say, in that case an HIV vaccine – when available – will be more deserving.

But no vaccine has ever won the prize.

So you go ahead and vote. The Nobel jury does not have to be perfect; science, unlike religion, is fallible.

Sandlines: Exile and patriotism – Who will rebuild DR Congo?

Throughout the Congolese conflict (1996 – present), civilian populations have served as the primary target for diverse combatant groups: ethnic militias, so-called ‘popular defense forces’, rebel factions and the government army. As attacks on civilians continue, persistent insecurity and suffering have triggered a different kind of explosion—a mass exodus of Congolese citizens seeking safety and opportunity in Africa, Europe and North America. An extensive Congolese diaspora was born.


Under fire, often from their own national army, the poorest of the poor seek safety and refuge in neighboring countries on foot, without clothing or food. Their survival needs are met by the many humanitarian agencies working in the region. Those who can afford the voyage to Nairobi, Johannesburg, Brussels or Montreal are of the skilled and educated middle class, once central to Congo’s administrative and professional sectors. A reality for many African countries today, conflict-driven brain drain is often wrongly attributed to the ‘globalization dynamic’, and its consequences are measured by their impact on host countries. On the contrary, the most devastating result is felt at home. The mass exodus of middle-class, educated Africans fleeing violence at home into the foreign diaspora creates a crippling cultural and professional void, one with far-reaching consequences for the home country.

In the case of DR Congo, filling this void depends largely on the country’s success at rebuilding a functional state, one to which Congolese expatriates can confidently return. Now that the presidential election results have been announced and the defeated party, led by a former warlord, has promised to accept the outcome peacefully, the new government can begin the path of national reconstruction. The first priority is to establish a secure environment in which reconstruction can proceed, thereby attracting expatriate Congolese to return home.


The costs of a thriving Congolese diaspora go deeper than the void left by an absent educated middle class, whose departure also sees the voice of civil society fall completely silent. Few are left to challenge the predations of a militarized and self-interested political class, atrocities to which all are witness but none dare condemn openly. Congolese human rights activists are few and far between, and independent journalism is sold to the highest bidder. Reporting on the conflict itself is left to foreign news agencies whose media outlets are inaccessible to local residents, the real victims of the war. As Congo’s wartime history has gravely illustrated, the greater the number of educated people leaving, the deeper the darkness and isolation enveloping the country. While no numerical figure exists, the near-total absence of a skilled, literate labor force in the country suggests the enormity of the present vacuum.

Another component of Congolese society that has disappeared in the mass exodus is its patriotism, understood as commitment to one’s country and a willingness to sacrifice for its cohesion and progress. National pride is now articulated in divisive and xenophobic terms. Looking inward, this means a tense cleavage between the Lingala and Swahili-speaking demographic who supported the two presidential finalists, Jean-Pierre Bemba and Joseph Kabila. Looking outward, patriotism takes the form of suspicion of western powers, who are conveniently blamed for all Congo’s woes. There is no collective condemnation of the failures of Congo’s political elites, no popular mobilization to dismiss ineffective politicians and replace them with sincere leaders. With the educated class lost to the diaspora, this dynamic of oppression and submission is perpetuated in two ways: corrupt politicians go unchallenged and the illiterate masses comprising the voting electorate are more easily manipulated by the same leaders.

I have worked regularly in the Congo for the last 18 years, and am often asked what I think the country needs to return to normalcy. To Congolese ears, my response is contrarian but not incomprehensible: “Where are Congo’s patriots? Those who abandoned the country in search of a better life should come home, sacrifice, and rebuild. Stop waiting for others to do this in your place.” Embarrassed, a Congolese friend living in the US responded: “But it is so hard to feel patriotic today. What is there to invest in or be proud of? I live abroad because it offers the one thing, the most important thing that I can’t get at home: a decent place to raise and educate my children.”

You can’t argue with that. Or can you?

In the popular psyche, the perception that the political class is an untouchable elite, inaccessible to ordinary Congolese and above the law, is wholly entrenched after three decades under President Mobutu Sese Seko and, in his wake, eight years of violent conflict. Such blind acceptance is extremely disempowering to the masses, who in so doing sacrifice all vestige of political agency. Blind faith in one’s leaders opens the door to impunity for the political class itself, who are thereby removed from all accountability to their political subjects. In Congo and elsewhere, the post-colonial era of African dictatorships has unfailingly applied this simple formula for success. And while many see the cult of personality as a sham—the ‘Big Man’ myth, Africa’s panem et circenses—no alternative political models are within reach.

An extended period of oppression by a long cast of characters, not limited to the war but dating from the Mobutu era, each of whom claims to be ‘of the people and for the people’, has eroded much of the popular will to mobilize for a better present and future. The cynical patriotism manifested by political elites—cronyism and corruption instead of serving the collective interest and cultivating a culture of accountability—has left many Congolese with no other model to follow. At the local level, authorities regurgitate the same false rhetoric of serving the nation as they bribe local citizens and divert vital resources away from intended beneficiaries and into their own pockets.


But as the saying goes, hope dies last. For many Congolese, a solution to their problems will come not from their leaders or even themselves, sadly, but from the western countries where so many have fled in search of a better life. Paradoxically, western countries are seen both as responsible for Congo’s crisis and as its only legitimate savior. During the war’s most bitter years, many felt that only a ‘Marshall Plan’—one implemented and managed directly by the international community—could deliver Congo from its chaos and misery.

Although many pockets of conflict and insecurity remain, the presidential elections transpired without a return to all-out war, as many feared. The next six months will determine whether the elected government will sufficiently right its course to begin attracting the diaspora to return home. Should this fail, replacing the educated and professional Congolese diaspora will take an entire generation of imprinting today’s youth in the image of the departed. But Congo cannot afford to wait another generation for its renewal. The heart and mind of its collective professional and economic capacity—the diaspora itself—must return from their places of refuge abroad and begin rebuilding the country.

Edward Rackley posts frequently at his personal blog, Across the Divide: Analysis and Anecdote from Africa.

Monday, December 4, 2006

A Case of the Mondays: Islam is Western

I really wish that the people in the United States, Canada, and Europe who complain that Muslims are destroying Western culture would look at earlier groups of immigrants. The same things that people say about Muslims—that they’re an alien culture, that they don’t respect democratic values, that they treat women badly—were also said about Jewish, Italian, and Polish immigrants to the US a hundred years ago. The things people say about Islamic countries were true of a significant fraction of the West as late as the 1970s.

Islam and Christianity are so similar that they are almost, but not quite, the same religion. They’re both monotheistic, with all the cultural implications this carries. They both have a progressive view of the world, in which good works and proselytization will create an increasingly better world. Their eschatologies are remarkably similar. Overall, Islam is hardly different from Protestant Christianity. It’s entirely by accident that right now Muslim regions are more conservative and anti-democratic than Christian regions. Abstractly, there is nothing that prevents what is commonly called the West from eventually expanding as far south as the Sahara desert and as far east as Iran or even Pakistan and considering Islam as one of its two main religions. Just like there used to be a clearly defined Catholic West and a Protestant West, it makes sense to talk of a Christian West and a Muslim West.

More concretely, it’s instructive to compare Muslims to Jews. When Jews started immigrating to the US from Eastern Europe en masse, they were significantly more conservative than Christians on most issues, including all of those that anti-Islamic Westerners consider now in their assessment of Islam. They were almost invariably ultra-Orthodox; secular European Jews typically accepted Zionism and emigrated to Israel or tried to assimilate into the surrounding mainstream culture. If the practices of ultra-Orthodox Jews in Israel today are any indication, these immigrants were insular, stayed in enclaves like Brooklyn Heights and Williamsburg, had birth rates that would put today’s Arabs to shame, and treated women with about the same level of respect as Mormon polygamist sects. As late as 1963, Betty Friedan considered Jewish-Americans and Italian-Americans as examples of groups that were more patriarchal than mainstream America in The Feminine Mystique.

That Jews are now the most reliably liberal ethnic and religious group in the United States should suggest that the people who rant about the Islamization of Europe have a disturbingly myopic view of history. Jews had few structural barriers to integration; American cultural policy has always been neutral, neither suppressing minority-religion civil society institutions the way France does nor shoving them down people’s throats the way Israel does. Anti-Semitism ran rampant in the United States up until 1945, when people started feeling guilty about the Holocaust, but there were numerous institutions that Jews could turn to besides the synagogue. Still, the process took almost an entire century, and the integration of white Christian ethnic minorities, like Italians and Poles, took only slightly less. If a similar thing doesn’t happen to European Muslims, Europe will have only its countries’ own cultural policies to blame.

In The Clash of Civilizations, Samuel Huntington defines Western civilization based on liberal democratic notions like democracy, human rights, and gender equality. On that basis, he proceeds to claim that the West consists only of the US, Australia, New Zealand, and the Protestant and Catholic areas of Europe. Other people who focus on the cultural differences between Christians and Muslims are less explicit, but they still seem to believe similar things, perhaps with slightly tweaked civilizational boundaries.

The problem with Huntington’s assessment is that it ignores the fact that it’s just a coincidence of the last fifteen years that what he defines as the West is more or less contiguous with the part of the world that consists of democracies with at least moderate levels of gender equality. Thinkers in Protestant countries—and in France, which has been at odds with the Papacy for centuries and fought on the Protestant side in the Thirty Years’ War—developed liberalism at a time when Catholic countries were authoritarian backwaters. Contrary to Huntington’s claim, the Enlightenment didn’t begin in Catholic and Protestant Europe while skipping Orthodox Europe, Latin America, and the non-Christian world; it began in England and France, and spread from there to countries that in some cases had been conservative in culture and government for hundreds of years.

All this means that critics of Islam, such as Mark Steyn and Daniel Pipes, are letting prejudice overwhelm their sense of reason. If you look at the situation between 1990 and 2006, you’ll indeed see that Muslims tend to be more religious, more misogynist, and more anti-democratic than American and European Christians. So what? If you looked at the situation between 1910 and 1925, you’d see that the same comparison applies to Jews and Catholics versus Protestants. It would even work better because you wouldn’t have to contort yourself to explain why what you say are Western values are not found in Russia and most of Latin America; you’d need to explain why France should be grouped with Britain rather than with Spain, but that’s far easier. That period of time saw emerging democracies in Germany and Czechoslovakia, both of which were dominated by Protestants (Prussians and Czechs respectively), compared with Italy’s slip into fascism. Applying the same methodology that Christian and Jewish critics of Islam use, you’d conclude that Catholicism was a backward religion that threatened to take over the United States via immigration and high birth rates.

Of course, many people actually said that, not so much about Catholics as about Jews. For most of those, democratic values were just a front for anti-Semitism, because they were a good abbreviation for “Our culture.” American anti-Semites were likely to worship Hitler, even though his values were anything but what Americans consider American values. Western anti-Muslim writers seem to worship Putin’s strong-arm treatment of Muslims, even as he destroys the democratic institutions they all profess to want to protect.

What is more, if Western values are defined by democracy, women’s rights, and so on, then there is no such thing as the West, only more liberal people and less liberal people. Almost every country in the world has been democratic at one point; states usually abandon democracy only when it fails to work or when the military is strong enough to mount a coup, just like in inter-war Italy and Germany. People have been slower at adopting feminism, but given that Jews and Italians and Poles didn’t do anything to lessen women’s rights in the US, it’s safe to conclude that the people who promulgate fears that Muslims will pressure Europe to adopt Sharia laws are more interested in hating foreigners than in telling the truth.

One approach is to conclude that civilizations the way Huntington defines them don’t exist at all. Another is to say that they exist, but have nothing to do with liberal values. If the latter approach is correct, and Huntington’s basic framework of basing civilizational boundaries on religion has merit, then Islam is part of the West (indeed, the lack of a mosque hierarchy makes Islam more Western than countries where the Pope gets to dictate abortion law). That inclusion should help shatter myths of Western cultural supremacy, which are surprisingly prevalent among people who claim that what they like about the West is its pluralism. Unfortunately, like their anti-Semitic ideological ancestors, anti-Muslims did not come to be what they are now due to any examination of evidence, but due to some form of prejudice.

Teddy Roosevelt’s Ghost

Two years ago, I dressed up as Theodore Roosevelt for Halloween, and my friend Emily dressed as Cuba. Together, we were “the Spanish-American War”. We weren’t trying to honor the man, the country, or U.S. interventionism. Rather, we were trying to give a bruised and hard-to-defend moment of American history a rare outing as a costume. Also, it let Emily buck the trend of “Sexy Pirate” Halloween costumes.

It was a terrific mistake. I thought my T.R. costume was clear enough: a fierce moustache, a “Big Stick” from the backyard, a pair of khakis, a second-hand hunting jacket, a cowboy hat. Voila! Colonel Teddy Roosevelt, ready for San Juan Hill. Emily’s costume was a little more of a problem — how does one dress as a country, let alone Cuba? — but we settled on a short black wig, a Spanish skirt, a fake parrot, and a bandolier. Just the sort of thing a hack costume company might actually sell as “Cuba”, if they were interested, which was exactly the point.

Incredibly, only one person on the streets of New York that night got it. What a surprise. At the time, it was a disheartening lesson that our little obsession with the mash-up of American history was not only more than a little pretentious, it was also mostly unshared by anyone else. But a last-minute chance encounter made the whole venture somewhat worthwhile. We were shuffling home (my borrowed boots were far too small), when we passed a group of Latino guys hanging out outside an apartment building in the West Village. They took a look at us, and one said loudly, in Spanish and provoking great belly laughs, “Look! Here comes a pirate and a [derogatory Spanish word for a homosexual male]!” I should have winced, but instead I stifled a laugh. Was there any sharper irony than a costume of T.R., the self-(consciously-)made paragon of “manliness” and the chin-thrusting embodiment of American imperialism, being read as a [derogatory Spanish word for a homosexual male] instead?

That story came to mind when I read that on Friday a U.S. Postal Service mechanic pled guilty to having stolen the revolver used by Roosevelt — then a colonel in the U.S. Cavalry — in Cuba during the Spanish-American War. In April of 1990, Anthony Joseph Tulino apparently visited Sagamore Hill, Roosevelt’s Long Island home, and stole the .38-caliber Colt Roosevelt used during the battle of San Juan Hill. It was no ordinary revolver: it had been salvaged from the wreckage of the U.S.S. Maine after the battleship exploded in Havana in 1898 — providing the pretext for war — and with it Roosevelt apparently shot a Spanish soldier during the Rough Riders’ most famous charge. Tulino kept it wrapped in a sweatshirt in his closet until a friend tipped off the police. He was prosecuted under the American Antiquities Act of 1906, signed into law by Roosevelt himself, and his guilty plea ended what a U.S. attorney called a “16-year-old mystery,” returning a “treasured piece of American history…to the public.”

Yes, I wondered, but just how “treasured” a piece of U.S. history is it really? (Monetarily, the revolver is valued at $500,000.) Just how “treasured” is any piece of historical memorabilia owned by a president, when compared to what a Marilyn Monroe jacket or DiMaggio jersey might fetch? I think pieces like the pistol are important, but then I also dressed up as its reckless owner for Halloween one year. How many Americans know — or would care — why T.R. and his ghost have been haunting recent American culture and policy? President Bush thinks he knows — he read Edmund Morris’s “Theodore Rex” over the first holidays after the September 11th attacks, and in 2003 I did a double take when I saw a NY Times picture of him advocating intervention in Iraq with a painting of Roosevelt in the background. Hardly a coincidence.

But despite Roosevelt’s mark on anti-trust, health and environmental law, and the way he ushered in the “American Century” by asserting America’s exceptionalism and duty to intervene abroad, of the four presidents in the Mt. Rushmore club (all chosen, incidentally, for their role in protecting the republic and expanding its territories), he’s the least likely to be recognized by name. (This might be due to little more than T.R.’s lack of a dollar-bill home. Maybe Sean Combs’s great-grandson will one day say it’s all about the Roosevelts, instead, making this column even more irrelevant.)

American ignorance of its non-Washington-Jefferson-Lincoln presidential past seems to be a core joke in another interesting moment for T.R.’s ghost this month:

“Night at the Museum,” a Ben Stiller comedy to be released on December 22nd. In the most recent preview, we see Stiller — an applicant for a position as a security guard at New York’s Museum of Natural History — looking up at a posed mannequin of Rough Rider-era T.R. on horseback.

“Ahh, Teddy Roosevelt,” he says to a museum employee. “He was our fourth president, right?”

“Twenty-sixth,” she says right back.

“Twenty-sixth,” Stiller notes.

It’s an easy joke, apt for almost any historical figure, but it’s brought to life by what seems to be the movie’s central conceit: that when the sun goes down, all the exhibits in the museum come to life — the Wild West dioramas, the T-Rex skeleton, and most importantly, the Roosevelt mannequin, played by none other than Robin Williams. The first time I saw the preview I flinched. Robin Williams? But upon reflection you realize that casting one of America’s most manic comedians as one of America’s wildest presidents was a perfect choice, and says a lot about T.R.’s legacy. There’s no other American president whose character (what we know of it, at least) can hold its own, not as the straight man (see Abe Lincoln in “Bill & Ted’s Excellent Adventure” or Nixon in “Dick”), but as a source of laughs in its own right.

Monday Musing: Aptitude Schmaptitude!

Like most people, I have no special gift for math. This doesn’t mean, however, that I am mathematically illiterate, or innumerate, to use the term popularized by John Allen Paulos. On the contrary, I know high school level math very well, and am fairly competent at some types of more advanced math. I do have a college degree in engineering, after all. (There is no contradiction in this–pretty much anyone can be good at high school math.) While the state of mathematical incompetence in this country has been much lamented, most famously in Paulos’s brilliant 1988 book Innumeracy, it is still tacitly accepted. Around the time when Paulos was writing that book, I was an undergraduate in the G.W.C. Whiting School of Engineering at Johns Hopkins University, and I soon noticed that to get help with mathematics, one generally had to consult with Indian, or Korean, or Chinese graduate students. (The best looking women happened to be in Art History though, so I very quickly developed a deep fascination for Caravaggio!) Some of the engineering departments (like mechanical engineering) did not have a single American graduate student, and since that time things have only grown worse, with much of the most important technological and scientific work in this country being performed by immigrants. (About a quarter of the tech startups in Silicon Valley are owned by Indians and Pakistanis alone.)

Being incompetent in math has become not only acceptable in this widely innumerate culture, it has almost become a matter of pride. No one goes around showing off that he is illiterate, or has no athletic ability, but declarations of innumeracy are constantly made without any embarrassment or shame. For example, under a small essay that I wrote here at 3QD about Stevinus’s beautiful proof of the law of inclined planes, my extremely intelligent and accomplished friend and frequent 3QD contributor Josh (now teaching and studying writing at Stanford) left an appreciative comment, adding, “I couldn’t math my way out of a paper bag.” (Sorry to pick on you, Josh, the example just came handily to mind!) Confessing confusion about numbers is taken not only to display an endearing honesty in self-regard on the part of the confessor, but also to hint at a fineness of sensibility and high development in other areas of mental life. Alas (Josh notwithstanding), there is no evidence of any such compensatory accomplishment in those who are innumerate. Not knowing high-school-level math is not easily excusable. But reader, if you are innumerate, it may not be your fault and I will not scold you. In fact, I’m going to try and pin the blame on American culture.

The way I see it, there was a one-two cultural punch which has knocked out numeracy in this country: first, there was a devaluing of mathematical competence in and by pop-culture; second, justification was provided for not learning mathematics to those already disinclined to do so by the devaluation. That’s it. The rest of this column is an attempt to flesh this out a little bit.

Just like learning to read (or for that matter, learning to play the piano), mathematics is something that takes years to learn well and develop a good feel for. Reading, writing, playing the piano, and doing math are highly unnatural activities (unlike speaking, say) which we are not naturally evolved to do. Instead, we take abilities we have evolved for other purposes and subvert them, because it is so useful to learn these things. And the price we must pay is that they are not always a great joy to do. Just as one must learn one’s ABCs or practice one’s scales, one must also memorize one’s times tables, and I cannot think of a way to make that particularly interesting. It just has to be done. In fact, young students have to be disciplined into learning these things. But before anything else, it must be made clear that while learning math requires no special abilities, it is different from learning some other things in one crucial way: the study of math is (at least up to the high school level) very hierarchical and cumulative. While one may suddenly do very well in a European history course in high school while having paid no attention to any history in junior high, it is not possible to do well in algebra in high school without having learned the math one was presented with in junior high. I sometimes tutor students for graduate admissions tests like the GRE or GMAT, and the first time I meet with them they often show me algebraic word problems they got wrong in a practice test. I ask how their junior high math is, and no one ever admits that they can’t do 7th or 8th grade math. Then I ask them to subtract one number from another for me, using a pen and a piece of paper I hand them: say −2 7/8 minus 1 3/17. You’d be surprised how many of them are tripped up and make a mistake in a simple subtraction that any 8th grader should be able to do.
The problem is they really cannot do ANY algebra until they are consistently and confidently competent in such simple tasks as adding, subtracting, multiplying and dividing numbers (and yes, this includes fractions, decimals, and negative numbers), and even these college graduates generally are not.
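For any reader who wants to check that subtraction, here is a quick sketch using Python’s standard fractions module (the numbers are simply the tutoring example above; the variable names are mine):

```python
from fractions import Fraction

# The subtraction from the tutoring anecdote: -2 7/8 minus 1 3/17.
# Step 1: convert each mixed number to an improper fraction.
a = -(Fraction(2) + Fraction(7, 8))    # -2 7/8  ->  -23/8
b = Fraction(1) + Fraction(3, 17)      #  1 3/17 ->   20/17

# Step 2: subtract; Fraction works over the common denominator 8 * 17 = 136.
result = a - b
print(result)  # -551/136

# Step 3: convert back to a mixed number: 551 = 4 * 136 + 7.
whole, rem = divmod(abs(result.numerator), result.denominator)
print(f"-{whole} {rem}/{result.denominator}")  # -4 7/136
```

Exact rational arithmetic like this avoids any rounding a calculator would introduce; the point, of course, is that a student should be able to carry out the same three steps with pen and paper.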

When I was a young child in Karachi, I liked reading Archie comic books, the hero of which is a bumbling, freckled, red-headed student at Riverdale High. He and his slightly evil schoolmate Reggie have a rivalry over class-fellows Betty and Veronica, who in turn are rivals for their attention. A slew of hackneyed characters rounds out the cast of this teenage-hormone-drenched-yet-wholesome comic book sit-com, including the glutton Jughead, the jock Moose, and others, but one of the least attractive characters serves as the pop-cultural stereotype of the math prodigy: Dilton Doily. Ridiculously and alliteratively named for a small ornamental mat, poor Dilton is smart but must pay the price. He is a small, unattractive, unathletic and insignificant nerd, complete with coke-bottle glasses and a pocket protector. No one in his right mind would or could look up to Dilton as a role model. Rather, he almost seems to be there as a warning of what might happen to one if one doesn’t watch out and avoid math. The rather stupid everyman Archie is, of course, glorified as an ideal, and it is he who usually gets the girl. This is just one of a million such stereotypes in movies, TV shows, books, cartoons, and a zillion other things in which being mathematically literate is equated with being, at best, impotent and insignificant and, at worst, a sideshow freak. This is in part because geniuses in math, as in everything else, are sometimes eccentric, and in crudely contemptuous caricature, this eccentricity is easily exaggerated into freakishness. (In fact, I think that Stephen Hawking captures the popular imagination precisely because, with his computer-generated voice and his sadly twisted pose in his wheelchair, he looks freakish to people, and this so conveniently fits in with the popular prejudice about mathematical genius. It makes people feel good about being innumerate if being numerate is going to make one into a physical Stephen Hawking. The public even exaggerates his mathematical and scientific ability in a twisted sort of sympathy: in a poll of professional physicists, Hawking did not even make the top twenty living physicists, though popular polls would probably place him at number one; and probably at number two, after Einstein, of all physicists, living or dead.) I could really go on forever providing examples of cultural hostility to mathematical literacy (and an argument could even be made that this is part of an overall anti-intellectual trend in America in the last few decades), but I am not interested in doing that here. My point is that it ain’t cool to be good at math.

But here’s the devastating second part of the one-two punch combination: if you haven’t learned your math, it’s because you don’t have an aptitude for it. (And for the reasons given in the previous paragraph, you might as well thank your lucky stars for that!) Through a complex series of events, I came to the United States as an 11-year-old boy to live with my brother in Buffalo for two years before returning to Pakistan for high school. I attended 7th and 8th grades at a suburban public school, and I loved it. To this day, I remember many of my teachers with immense gratitude and fondness: Mr. Shiloh, Social Science (“Washington, Adams, Jefferson, Madison, Monroe, John Quincy Adams…” Yes, I can still recite all the presidents, Mr. Shiloh); Mr. Schwartz, Science; Mr. Coin, Mathematics; Ms. Muller, on whom I had the biggest schoolboy crush, German; etc. But one bad thing did happen to me: I was given something called a differential aptitude test (DAT) soon after my arrival, and the results were explained to me by my homeroom teacher: apparently, while I was supposedly gifted in verbal skills and artistic abilities, I was not much good at math or music. I took this to heart, and stopped paying much attention to mathematics. What was the point, if I just didn’t have the requisite ability to get it? It took my father’s devoted and prolonged drilling in mathematics a few years later, back in Pakistan, to undo the damage of that test, and I eventually got 99 out of 100 marks on my board exam there.

I am by no means alone in this experience, and I believe that these tests, and the whole idea that some children are better at some things and others at other things and that they should be told this very early, are stupendously dangerous. What purpose can it serve, other than to encourage kids to give up on subjects that they may not have previously done well in for a thousand different completely contingent reasons? They will naturally already be trying harder at things they are good at, so they don’t need more encouragement there. This idea, that some people are good at some things, and others at other things, is fine if it is a matter of catering to children’s self-esteem when they are selecting a sport to play, for example. One person can be happy playing football, while another, smaller person might become good at badminton, or whatever, letting everyone believe that they have some special ability. After all, most of them will not grow up to be professional sportsmen or women. But when it is about something as fundamental and basic to future understanding of the world that they live in as mathematics is, it is hugely destructive. I firmly believe that anyone normal can be taught to master the mathematics of high school, and that this is all that is needed to produce a profoundly more numerate society, but it is near-impossible to overcome the “I’m just no good at math” barrier. Why are people even allowed, much less encouraged, to believe this about themselves? For those students who are geniuses, as well as those who are truly handicapped in some particular mental skill, these tests are not needed; that can be determined in other ways. It is the huge majority of kids falling under the middle of the bell curve that tests like the DAT are so damaging to, and this, I think, is the real root of innumeracy in this country.

And then there are those who feel that it is no great loss to be innumerate. In that case, I’m sorry, but you don’t know what you are missing. Some of the most profoundly beautiful ideas produced in the last few thousand years are beyond you, as is the serious study of about 80% of what is taught in modern universities. Even the social sciences cannot exist without math anymore, and you cannot have any deep sense of political and economic issues if you are completely innumerate.

Let me summarize: math emphatically does not require any special ability, but it does require a lot of discipline, and if you fall behind, because of its cumulative nature, you will find yourself in a cycle of failure to master whatever you are presented with next. If you try to make the argument that math is something that only a portion of the population have the congenital ability to master, even at the high school level, then you must also make the argument that this mysterious ability, unlike any other mental ability that we know of, is also sharply unevenly spread across various countries. Japanese children have much more of it than American ones, for example, because Japanese high school students regularly trounce American students at the same level in math tests. You will also have to explain how Japanese children who have been living in America for a couple of generations lose that congenital ability. No, I’m afraid that will not do.

This essay is dedicated to my friend and greatest anti-innumeracy warrior, John Allen Paulos, whose book Innumeracy I mentioned above and recommend highly. Click it to buy it, or click here to buy his other books.

My previous Monday Musings can be seen here.

POST SCRIPT: John Allen Paulos has sent the following comment by email:

A nice story and some good insights, Abbas, and thanks for the kind words.

I agree that to an extent mathematics is a hierarchical subject and that a certain amount of drill is absolutely necessary to do well in the elementary portions of it. Nevertheless, it’s important to realize that considerable understanding and appreciation of many important ideas can be obtained via puzzles, everyday vignettes, expository articles, and sketches of applications.

A loose analogy comes to mind: If all one ever did in English class during elementary, middle, and high school was diagram sentences, or all one ever did in music class during those same years was practice scales, it wouldn’t be very surprising if one lacked interest in or appreciation for literature or song. Given suitable allowance for hyperbole, however, this is what often passes for early math preparation. The analogue of literature and song is not provided to mathematics students in their early studies, so there seems little rationale for developing the mechanical skills needed.

A marginally relevant anecdote: I gave a lecture once to a very large group of students at West Point. Whether because of their military interests or their personal psychology, some were quite interested in the sequence or hierarchy of mathematical subjects. During the question and answer session after my lecture, I was told that the proper order of these subjects was arithmetic, algebra, geometry, trigonometry, calculus, differential equations, and advanced calculus and then was asked what comes after advanced calculus. The students were nonplussed at my answer of “serious gum disease.”

Monday, November 27, 2006

The Future of Science is Open, Part 2: Open Science

In Part 1 of this essay, I gave an outline of the scholarly publishing practice/philosophy known as Open Access; here I want to examine ways in which the central concept of OA, the “open” part, is being expanded to encompass all of science.

Terms
Though I am adopting the term “Open Science”, there are a number of similar and related terms and no clear overriding consensus as to which should prevail.  This year’s iCommons Summit saw the conception and initiation of the Rio Framework for Open Science.   Hosted on the iCommons wiki, the Framework is presently an outline consisting mainly of a useful collection of links and does not offer a formal definition.  In a 2003 essay, Stephen Maurer noted that:

Open science is variously defined, but tends to connote (a) full, frank, and timely publication of results, (b) absence of intellectual property restrictions, and (c) radically increased pre- and post-publication transparency of data, activities, and deliberations within research groups.

Jamais Cascio and WorldChanging have been talking about open source science, making a direct analogy to open source software, for some time.  Chemists Without Borders follow Cascio’s definition in their position statement:

Research already in progress is opened up to allow labs anywhere in the world to contribute experiments. The deeply networked nature of modern laboratories, and the brief down-time that all labs have between projects, make this concept quite feasible. Moreover, such distributed-collaborative research spreads new ideas and discoveries even faster, ultimately accelerating the scientific process.

Richard Jefferson, founder and CEO of CAMBIA, uses the term BiOS (either “Biological Innovation for Open Society” or “Biological Open Source”), and the Intentional Biology group at The Molecular Biosciences Institute talks about Open Source Biology.   Peter Murray-Rust has recently put together a Wikipedia page on Open Data; he writes:

Open Data is a philosophy and practice requiring that certain data are freely available to everyone, without restrictions from copyright, patents or other mechanisms of control.

Though Science Commons, which grew out of Creative Commons, doesn’t use the term “Open Data”, they have a “data project” and the concept is clearly central to their efforts.  Best and most open of all (in my opinion), Jean-Claude Bradley has coined the term Open Notebook Science, by which he means:

…there [exists] a URL to a laboratory notebook (like this) that is freely available and indexed on common search engines. It does not necessarily have to look like a paper notebook but it is essential that all of the information available to the researchers to make their conclusions is equally available to the rest of the world. Basically, no insider information.


Conditions
For what I am calling Open Science to work, there are (I think) at least two further requirements: open standards, and open licensing.

In his introduction to the chemistry-focused Blue Obelisk (group? movement?), Peter Murray-Rust refers to Open Standards as “visible community mechanisms which act as agreed protocols for communicating information”.   What he is talking about is metadata and a semantic web for science.  To see this idea in action, consider the following citation:

Hooker CW, Harrich D.  The first strand transfer reaction of HIV-1 reverse transcription is more efficient in infected cells than in cell-free natural endogenous reverse transcription reactions.  Journal of Clinical Virology vol 26 pp.229-38 (2003)

You can read that, but a computer cannot do anything really useful with the text string as given: it has no idea which part of the string means me and which means Dave, where the title begins and ends, which numbers are page numbers and which are a date, and so on.  Now remember that PubMed, the database from which I got it, contains millions of such citations (and abstracts, and links between papers that cite each other, and so on).  Stored as text strings, they would be impossibly clumsy, but with the addition of a little simple metadata:

Author/s: Hooker CW, Harrich D.
Title: The first strand transfer reaction of HIV-1 reverse transcription is more efficient in infected cells than in cell-free natural endogenous reverse transcription reactions.
Journal: Journal of Clinical Virology
Volume: 26
Pages: 229-238
Year: 2003

the citation is broken down into meaningful fields, each of which can be searched or otherwise manipulated separately.  The computer can now treat each string after “Author/s:” as a series of substrings (author names) separated by commas and ended with a period, the numbers after “Pages:” as a numerical range, and so on — which means you can ask the database useful questions, like “show me all the papers written by Hooker, CW between the years 2000 and 2006 and published in J Virol”.  There you have (a very simple example of) the two pillars of a semantic web: metadata and standards.  Examples abound: the Proteomics Standards Initiative, MIAPE, MIAME, Flow Cytometry Standards, SBML, CML, another CML, the Open Microscopy Environment and dozens of others.  Metadata and associated standards are going to be increasingly necessary to scientific communication and analysis as more and more of it takes place online and as datasets grow ever larger and more complex.  Science Commons makes the point using the tumor suppressor TP53:

There are 39,136 papers in PubMed on P53. There are almost 9,000 gene sequences […] 3,800 protein sequences [and] 68,000 data sets available. This is just too much for any one human brain to comprehend.

Quite apart from lack of brainspace, there are answers in those datasets to questions that their creators never thought to ask.  In the same way that Open Access accelerates the research cycle and facilitates collaboration, so too does Open Data — and Open Standards is the infrastructure that makes it possible.
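The gain from structured fields can be sketched in a few lines of Python. The record layout and `search` helper below are my own illustrative inventions, not PubMed’s actual schema or API; they simply show how metadata turns an opaque citation string into something a computer can query:

```python
# A minimal sketch: the citation from above stored as structured fields.
# Field names ("authors", "journal", etc.) are illustrative, not PubMed's.

citations = [
    {
        "authors": ["Hooker CW", "Harrich D"],
        "title": ("The first strand transfer reaction of HIV-1 reverse "
                  "transcription is more efficient in infected cells than in "
                  "cell-free natural endogenous reverse transcription reactions"),
        "journal": "Journal of Clinical Virology",
        "volume": 26,
        "pages": (229, 238),
        "year": 2003,
    },
    # ...a real database holds millions of such records
]

def search(records, author=None, year_range=None, journal=None):
    """Return records matching all given criteria; None means 'match any'."""
    hits = []
    for rec in records:
        if author is not None and author not in rec["authors"]:
            continue
        if year_range is not None and not (year_range[0] <= rec["year"] <= year_range[1]):
            continue
        if journal is not None and rec["journal"] != journal:
            continue
        hits.append(rec)
    return hits

# "Show me all the papers written by Hooker, CW between 2000 and 2006"
results = search(citations, author="Hooker CW", year_range=(2000, 2006))
print(len(results))  # 1
```

A real database would index these fields rather than scan them one by one, but the principle is the same: once “Hooker CW” lives in an author field instead of somewhere inside a text string, questions about authors, years and journals become trivially answerable.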

In a similar vein, Open Licensing also provides a kind of infrastructure — in this case, for dealing with intellectual property issues.  It’s fine to simply put your product on the web and let the world do as it will, but many people prefer (or, depending on where they work, are legally required) to retain some control over what others do with their work.  In particular, if you are concerned with openness you may want to ensure that the original and all derivative works remain part of the commons (e.g. copyleft rather than copyright). That means reserving at least some rights, which is where licensing comes in. 

As with Open Access, the original model comes from software licenses.  The Free Software Foundation publishes three licenses designed to provide and protect end-user freedoms and maintains a list of other software licenses classified according to compatibility with FSF licenses.  The Open Source Initiative also maintains a list of approved licenses which meet their (slightly less restrictive) standards for Open Source.  If you are looking for a publishing license (for audio, video, images, text and/or software), Creative Commons is the place to go: they offer six main licenses which provide varying degrees of freedom to end-users, a think-before-you-license guide and a handy tool for choosing which license suits you best.  They also offer a number of more specialized licenses and the FSF GPL and LGPL software licenses.  Every CC license is provided in three formats: legal code that will stand up in court, a plain-language summary and a machine-readable version (built-in Open Standards!) that CC-savvy search engines can use to filter results by CC end-user freedoms.  As with the copyleft protections in the GPL, CC offers “share-alike” licenses that maintain end-user freedoms throughout derivative works.  The example that impresses me most strongly with the power of CC licenses is that Public Library of Science journals, collectively the flagship of Open Access publishing, are all released under a CC attribution license.  If you find yourself dealing with someone else’s license — for instance, a publishing company — and you want to provide Open Access, you can use the SPARC author addendum: simply attach a completed copy of the addendum to the publishing agreement and bring the publisher’s attention to it; more than 90% of journal editors will comply.  
You can also get an author addendum from Science Commons, who are working with SPARC and will soon offer plain-language and machine-readable versions like those that accompany CC licenses, as well as a web-based tool for choosing and preparing the appropriate addendum.

That covers copyright-based licensing, pretty much; but patenting is a whole different headache for Open practices.  Copyright inheres automatically (though there is a registry) in “original works of authorship” as soon as they are created, but patents are granted for inventions by way of a drawn-out administrative process and on a more complex basis than “who made this?”.  There are also important differences between patent laws in different countries.  The primary test-bed for open licensing approaches has been biotechnology and especially genomics, with particular emphasis on specific gene sequence data and databases.  The concern is that too much patent protection, combined with patents of too broad a scope, will stifle research and in particular exacerbate the difficulties faced by poorer nations in trying to establish research and development infrastructure.

One possible solution, at least for database information, is offered by the HapMap Project‘s “click-wrap” license.  Rather than assigning property rights, this is an end-user agreement that specifically disallows the patenting of genetic information from the database, unless such claims do not restrict others’ free access to the database.  This license has since been abandoned by the HapMap project, however, in order to allow integration of HapMap data into other public databases such as GenBank.

Other solutions focus on assigning property rights in such a way as to permit Open practices.  Yochai Benkler suggests what he calls publicly minded licensing for universities and academic institutions.  This form of licensing would consist primarily of an “open research license”, whereby the institutions would reserve the right “to use and nonexclusively sublicense its technology for research and education”, and would require a reciprocal license to research such that any (sub)licensee must “grant back a nonexclusive license to the university to use and sublicense all technology that the licensee develops based on university technology, again, for research and education only”.  There is a model for this sort of scheme in PIPRA, a collaboration among public sector agricultural research institutions which employs licensing language that aims to protect humanitarian use.  In a similar vein, Benkler also suggests a second variety of license, a “developing country license”, which would extend the open protections through development and manufacture to end-products such as drugs, so long as distribution was limited to developing countries.  Noting that university revenues from government research grants and contracts are at least an order of magnitude greater than those derived from patents, Benkler points out that the loss of certain licensing revenue would be minor at most.  The loss of the small possibility of a “gold-mine” patent would be more than compensated by gains in research efficiency and public perception of universities as public interest organizations rather than puppets of big business.

Science Commons has a more specific focus with its biological materials transfer project, which is aimed at retooling materials transfer agreements.  These are the contracts under which research laboratories exchange the physical objects of research — DNA, proteins, chemicals, whole organisms, and so on.  There is no standard format, since even the NIH Office of Technology Transfer‘s Uniform Biological Materials Transfer Agreement (UBMTA), despite wide support, does not cover all eventualities and is frequently modified or replaced with institution-specific MTAs.  I can tell you from experience that these things can be a nightmare.  The one I remember most clearly came from a large pharmaceutical firm which shall remain nameless; they were willing to send us some of their antiretroviral in pure form, provided we signed over our firstborn children and their children unto the seventh generation.  (I exaggerate, but you get the idea.  In the end we crushed up pills supplied by friendly clinicians, and the damn drug did nothing in our assay anyway.)  Science Commons’ efforts in this field have yet to bear fruit (that I know of), but given the Science Commons/Creative Commons track record I have high hopes.

There is also a more fully-developed model available.  The international nonprofit organization CAMBIA offers two BiOS licenses designed to create and protect a “research commons” (the Plant Enabling Technology License and the Genetic Resource Technology License) and is currently drafting a third license for health-related technology.  The essence of these licenses is a reciprocity agreement similar in concept to copyleft or “share-alike”, such that

…licensees cannot appropriate the fundamental “kernel” of the technology and improvements exclusively for themselves.  The base technology remains the property of whatever entity developed it, but improvements can be shared with others that support the development of a protected commons around the technology, and all those who agree to the same terms of sharing obtain access to improvements, and other information, such as regulatory and biosafety data, shared by others who have agreed.

To maintain legal access to the technology, in other words, you must agree not to prevent others who have agreed to the same terms from using the technology and any improvements in the development of different products.

In addition to the licenses, CAMBIA maintains BioForge, an open-source platform for research collaboration on which the licenses and other open practices can be, as it were, field-tested.


Definition

I think “Open Science” is the banner under which the various Open X clans might most profitably assemble.  It is punchy, fairly self-explanatory and does not carry any of the potential confusion with related movements in software that might plague “Open Source Science”.  (Nor, for that matter, will it give rise to daft analogies about what exactly is science’s “source code”.)  Moreover, it seems a natural counterpart to the established term Open Access, and is apparently the term of choice for Science Commons/iCommons, which puts the considerable weight of the Creative Commons behind it.  My personal favourite (term and practice) is Open Notebook Science, but this seems better suited to being the name of the most open subset of Open Science practices since, as with Open Access, it is likely that a range of applications will co-exist and co-evolve.

A formal definition will have to wait for future conferences at which scientists and their allies can hammer out the Open Science equivalent of the BBB Declarations.  For now, I think the Wikipedia Open Science stub has the right idea in propounding a sort of meta-definition: “a general term representing the application of various Open approaches… to scientific endeavour”.  Andrés Guadamuz González ventures “the application of open source licensing principles and clauses to protect and distribute the fruits of scientific research”.   In a recent paper (sorry, subscription only; see how useful OA is?) Ibanez et al. put it this way:

The Open Science movement advances the idea that the results of scientific research must be made available as public resource. Limiting access to scientific information hinders innovation, complicates validation, and wastes valuable socio-economic resources. Open Science is an effective way of overcoming the nearsightedness of the contemporary obsession with intellectual property. The practice of Open Science is based on three pillars: Open Access, Open Data, and Open Source.

It seems to me that Access and Data are crucial by definition; you could do Open Science which relied on proprietary software, provided you made the raw data and your publications openly accessible.  It is, of course, more efficient to use software that is available to everyone without intellectual property or cost barriers.  Similarly, open standards and open licensing might not be fundamental to the practice of Open Science, but both make possible such vast increases in efficiency that I would argue for their inclusion in any comprehensive definition or declaration.

In short, Open (Access + Data + Source + Standards + Licensing) = Open Science.


Coda

Once again, this is an enormous topic and I have given only a brief overview; if you spot anything I have missed or got wrong, please leave a comment.  (I am a scientist, after all; I am thoroughly inured to being wrong in public.)  This was supposed to be the second of two essays on the future of science, but I have run out of room and time so there will now be a third instalment.  In that piece I will try to show what Open Science looks like now, in its infancy, and to sketch some of the directions in which it might grow.

Update: part 3 is here.

….

Creative Commons License
This work is licensed under a Creative Commons Attribution 3.0 License.

Monday Musing: Some Random Thoughts on the Trial of Saddam Hussein

On November 5, one year and 17 days after his trial began, Saddam Hussein was found guilty. Predictably, the trial was subject to criticism and questions of legitimacy before it began, and from quite different sides of the issue no less.


In the lead-up to the war, but especially in the wake of the failure to find weapons of mass destruction, the brutality of Saddam Hussein’s regime was used to justify the invasion. Over his reign, Saddam’s attacks on his own civilian populations left a few hundred thousand corpses, even by conservative estimates. If we hold him responsible for the Iran-Iraq War, we can add an additional 1 million killed or wounded.

Against this backdrop, the decision to try him for the massacre of 150 Shi’a men and boys in the village of Dujail following an assassination attempt was a surprise. Images of the weirdly named Anfal massacre of at least 50,000 Kurds (other estimates range from 100,000 to 180,000), especially of the chemical weapons attack on the village of Halabja, had been played regularly in the build-up to the war. What the Dujail massacre had going for it was that it was straightforward and the great powers weren’t complicit. The trial thus already began with sweeping the dirt of the great powers, East and West, under the rug.


From the outset, questions of whether a government set up by foreign victors could legitimately try Saddam Hussein loomed over the trial. Mahathir Mohammed, Ahmed Ben Bella, Roland Dumas, and Ramsey Clark (now, there’s an interesting set) all moved to set up a joint committee to ensure a fair trial. Respectable organizations such as Amnesty and Human Rights Watch questioned the independence and impartiality of the court and raised concerns about the trial’s fairness.

The trial itself saw assassinations, the resignation of judges, and death threats against defense attorneys, all of which leads to questions not of whether Saddam did it, but of whether the trial process itself was fair. It certainly wasn’t orderly.

I only sporadically followed the trial. I was skeptical that it would achieve much in terms of personal or political justice. Moreover, I was doubtful that it would do much to establish the political legitimacy of the new Iraqi government. All of which just led me to cringe or sigh as the reports came in on the trial’s progress.

I couldn’t really imagine anything other than a death sentence, rumors of Rumsfeld’s offer of leniency in exchange for Saddam’s calling on insurgents to disarm and surrender notwithstanding. The task for and before Iraq was its own transformation. Given the regime’s treatment of Shi’as and Kurds, and de facto Kurdish independence in northern Iraq, it had to reconstitute itself as a polity if it was, and is, going to stay together. It also had to transition to a democracy. Against this backdrop, Saddam’s execution would have to be something like Cadmus’ slaughter of the cow to the gods in order to establish Thebes, or the execution of Louis XVI; that is, something of a foundational sacrifice.

The trial, verdict and execution would be truth and reconciliation, memory and a break with the past, and the legitimation of the new political order and the nation, all of which would have to be achieved in the midst of occupation and civil war, as well as the patronage of some of the more incompetent political figures in recent and even not so recent history.


Maybe it was the constant invocation of the Nazis as an analog for Saddam, but part of me was hoping that out of the coverage of the trial would come the sort of reportage that Hannah Arendt filled the pages of The New Yorker with during the trial of Adolf Eichmann, pieces which became the basis of Eichmann in Jerusalem: A Report on the Banality of Evil.

There were enough parallel issues: the legitimacy of the court trying Saddam; the attempt to have the horrors of Iraqi Ba’athism become the foundation myth (in the sense of mythic, not in the sense of false or not true) which would create a continuity between the peoples of Iraq and a new Iraqi polity; the issue of complicity of Shi’a and Kurdish leaders, the West, the East bloc, China, the rest of the Arab world; the Pontius Pilate-like reaction of much of the world to the trial; the nature of international law, crimes against humanity, and genocide. Moreover, there is a tale to be told of the hope and tragic descent into corruption and brutality of much of the post-colonial experience, a trajectory and narrative captured only on occasion and waiting to be captured in the form of the political theory-cum-reportage that Arendt deployed so well. Eichmann in Jerusalem, whatever its limitations, helps us to understand something about modernity, the officialese of modern bureaucracy, and ethics.

In Saddam Hussein and the experience of Ba’athism in Iraq, I imagine a similar tale could be told of the colonial aftermath, the Cold War, and the devolutions into thuggery.

I thought of Arendt and Eichmann in Jerusalem at the outset of the trial, but was strongly reminded of it after the verdict was read. Arendt famously writes of Eichmann as he goes to the gallows:

He began by stating emphatically that he was a Gottgläubiger, to express in common Nazi fashion that he was no Christian and did not believe in life after death. He then proceeded: “After a short while, gentlemen, we shall all meet again. Such is the fate of all men. Long live Germany, long live Argentina, long live Austria. I shall not forget them“. In the face of death, he had found the cliché used in funeral oratory … It was as though in those last minutes he was summing up the lesson that this long course in human wickedness had taught us–the lesson of the fearsome, word-and-thought-defying banality of evil.

Right after the verdict was delivered, Saddam’s lawyer delivered a message from the convicted dictator.

“The message from President Saddam to his people came during a meeting in Baghdad this morning, just before the so-called Iraqi court issued its verdict in his trial,” Khalil al-Dulaimi told The Associated Press in a telephone interview from Baghdad.

“His message to the Iraqi people was ‘pardon and do not take revenge on the invading nations and their people’,” al-Dulaimi said, quoting Saddam.

“The president also asked his countrymen to ‘unify in the face of sectarian strife’,” the lawyer added.

If, in the original conception of this war-crimes and at the same time political trial, the verdict and execution were to pave the road for the creation of a new, democratic and unified Iraq, then Saddam was clearly attempting to steal the thunder and make his death the founding of a new Iraq in a different way. He would go off to the gallows like a patriarch whose last request of his children is to be more decent and united. It echoed a bad Bollywood movie. But if he was acting like a patriarch whose dying request of his children was to be generous, tolerant, and forgiving, it was more as a patriarch who had molested and brutalized them throughout his life. Needless to say, the pleas weren’t being heard.

I avoided watching or following the trial because it was, despite its attempts at uncovering the crimes of Saddam Hussein in Dujail, an exercise in self-deception for all parties involved, and not because it was a victor’s justice. Self-deception, as the late political and moral philosopher Bernard Williams liked to point out, involves a conspiracy between the deceiver and the deceived. The idea that the choice of the Dujail massacre was anything other than a political choice, that the court was really interested in truthfulness, or even the creation of a new Iraq was the deception that all but a few purchased. If the trial of Saddam Hussein was to be one of the first attempts to address the new Iraq responsibly, then it has failed miserably. And we’ve lied to ourselves in thinking so.

Back to back, the self-delusions of Saddam Hussein and the self-deception of the coalition forces do offer lessons. But these seem hard to articulate. The trial, the verdict and the response itself seem to be lessons in (and the inversion has been used before) the evil of banality, of what happens when the quest for deeper political truth and the pressing political concerns of the community are subordinated to the interests of narrow parties. Let’s hope the Anfal trial fares better and that Iraq, its past, and future are more responsibly addressed.

Monday, November 20, 2006

Ern Malley: Doppelgänger in the Desert

Australian poet and author Peter Nicholson writes 3QD‘s Poetry and Culture column (see other columns here). There is an introduction to his work at peternicholson.com.au and at the NLA.

Here is the curious (curiouser and curiouser) case of Ern Malley, an entirely fictional poet, invented in 1943 to expose what the perpetrators thought of as Modernism’s foolishness. James McAuley and Harold Stewart spent an afternoon, apparently, putting together assimilations of quotations and extracts from policy documents about the breeding of mosquitoes, and other sources. Their friend, the poet A. D. Hope, watched, at a distance, over these events. McAuley and Stewart created a poet, Malley, a garage mechanic who had unfortunately succumbed to Graves’ disease at exactly the same age as Keats, leaving behind him a manuscript, carefully worn to look like the real thing. Malley’s ‘sister’ Ethel sent the manuscript of The Darkening Ecliptic to Max Harris, editor of the avant-garde literary magazine Angry Penguins, who published the work enthusiastically. (Angry Penguins comes from a Harris poem: ‘as drunks, the angry penguins of the night’.) Then followed the revelation of the hoax, much to the chagrin of Harris. Subsequent charges of publishing an indecent work added to the surreal aura surrounding this cause célèbre in Australian literary history.

Australia certainly has a lot of desert to contend with, but there can be deserts in the mind too, and the Ern Malley affair, as it has come to be known, does show a certain propensity for literary politicking and obstructionism that has not been without subsequent issue. It might be argued that the Malley affair gives a foretaste of the navel-gazing propensities of the poetry world which the general public have subsequently given the cold shoulder. The free exchange of ideas, generally regarded as the sine qua non of intellectual discourse, has sometimes had a hard time of it in Australia. Even now, one is likely to encounter violent squalls that would frighten the birds from the sky, or provide satirists with fruitful fodder. Fortunately, the Internet demolished all the old redoubts and, at last, there now really is a free exchange of ideas. All the same, I don’t think Australian culture is nearly as well known as it ought to be, though some recent successes of the film industry and the opening of the Aboriginal art component of the Branly museum in Paris show positive moves forward.

One would have thought the Ern Malley character and his literary works might have died off subsequent to the revelations of the hoax, but such has not been the case. There is something in the character of Malley, some aspect of the Australian temperament, which still appeals to writers, painters and composers. The artists Sidney Nolan and Garry Shead both produced a series of works based on Malley. There has been an Ern Malley jazz suite. Peter Carey wrote a novel that used the Malley story as a template—My Life As A Fake. After all, the mechanic who died so young mirrors many a real tragedy—Henry Lawson’s alcoholism, Francis Webb’s struggles with schizophrenia, Brett Whiteley’s drug overdose in a Thirroul motel room. These were artists who had achieved important art. They had been recognised. Yet their deaths raised the lingering question—did art really matter in the wide, brown land? Was seriousness possible in a country that took pride in kicking the stuffing out of anyone who took themselves seriously enough to take art seriously? Sidney Nolan’s brutal Malley portrait would seem to suggest a certain self-loathing, or hopelessness. It seemed there was always going to be the doppelgänger waiting in the desert to pull the rug from under artistic pretensions, either the artist’s lesser self, doubting the art made, or the less tangible antagonism, or indifference, of critic, cultural commissar and public. What better symbol of this negativity than the Ern Malley affair with its amalgam of farce and hostility, creativity and cultural atavism?

The Malley poetry is mixed in quality, but there is one poem that strikes at the root of the Australian experience: ‘Dürer: Innsbruck, 1495’.

                      I had often, cowled in the slumbrous heavy air,
                      Closed my inanimate lids to find it real,
                      As I knew it would be, the colourful spires
                      And painted roofs, the high snows glimpsed at the back,
                      All reversed in the quiet reflecting waters—
                      Not knowing then that Dürer perceived it too.
                      Now I find that once more I have shrunk
                      To an interloper, robber of dead men’s dream,
                      I had read in books that art is not easy
                      But no one warned that the mind repeats
                      In its ignorance the vision of others. I am still
                      The black swan of trespass on alien waters.

As has been commented by Herbert Read and others, this is the real thing. The hoax poem becomes, even against its makers’ best intentions, a serious work of art. How often in Australia has the satirical sheared off into melancholy, savagery and the dark, bitter, sunburnt tragic mask. The harpy from Moonee Ponds hell with a chainsaw in her mouth, Dame Edna Everage (average, get it), Barry Humphries’ vitriolic creation, is just one fictional character you feel fictional Ern could have had earnest communications with. One problem: Ern ‘died’ before Dame Edna was ‘born’. Still, they could have metaphysical communications, like Laura Trevelyan and Voss in Voss. How often has the Australian artist felt the sting in the tail of ‘I am still / The black swan of trespass on alien waters’. Dame Edna intrudes with her gladioli and mocking sideswipes. Art intrudes with its unwanted psychological complexities, its unruly passions, its refusal to stick to any ordained historical script.

In retrospect it seems far-fetched to think that such a hoax could have held back Modernism’s tempests in Australia. Art will out and have its say, whatever the oppositions involved. That is simply the nature of art. The Australian cultural melting pot was never going to be constrained by Malley-type hijinks, and Australian culture bifurcated in the decades following on from the nineteen-forties with some astonishing efflorescences, the diverse styles of Aboriginal art being just one example. Poets of every stripe crossed the continent, perhaps a little like the endangered (now extinct) Thylacine, desperate for an audience and therefore sometimes likely to go troppo and savage one another when audiences, or contracts, were in the offing.

Max Harris is a much more important figure than he has been given credit for in Australian literary and cultural history. Not only was he the person on whom the whole Malley fracas descended; he was also only in his early twenties when he made his editorial decision to publish the Malley poems. Youth is not always wasted on the young, and here youth achieved, with exuberance and delight, the rarest publishing gesture—courage to believe in something new. Here the word became deed. Then followed the vituperation and philistinism. Harris bore with it and kept on speaking up for Australian contemporary modernism before many of us had seen the light of day, encouraging other artists to go forth and multiply. Along with John and Sunday Reed, Harris helped stimulate ‘the vegetative eye’ (a Harris novel scorned by Hope), an eye rinsed clean in the crashing surf and brilliant light of Australian landscape and idiom.

The last words in the suite of sixteen poems that comprise the Ern Malley legacy are ‘Beyond is anything.’ This was meant, I guess, as a last mordant commentary by the originators of the hoax on the perceived hopelessness of the Modernist cause. Well, Modernism has had its day, as have so many other ‘isms’. Yet art continues to prosper in the unlikeliest places, and not inexplicably, since art is essential to the human. Such anarchic splendour in the mallee scrub! ‘Beyond is anything.’ And if this were to turn out to be true . . .

Here is my fantasy poem in which Max Harris, poet, critic, publisher, bookseller and cultural provocateur, marries an anagram of Ern Malley. In fact, Harris married Yvonne Hutton. They had a daughter, Samela. ‘Brilliant deserts as the prophets come’ is taken from A. D. Hope’s ‘Australia’: ‘Hoping, if still from the deserts the prophets come, // Such savage and scarlet as no green hills dare / Springs in that waste.’ The last line of the Malley Dürer poem is referenced in line seventeen.

 

   Homage To Rema Nelly
               I.M. Max Harris

Glorious niece of language’s funambulist,
Side-stepping safe, orders skewiff,
Where seem is dream, and what’s invented lives.
Your uncle, nuncle, major poemquake shifted
Alphabets to greetings, greenings, ghosts
Of Paris, absinthe visions spread
Over Dali sunsets, time stretched, drowned under.

You toyed with our sedate revisions written,
Our daubs and music stillborn at first hearing,
Your lightning dances trampling on our thighs
Which we had heaved to blurt or cauterise.
Temptress under arcades of forgetting,
We honour what you gathered, rosy splinters
Stuck in shards, then pushed through sunburnt blisters.

To maximum, dear Max, chosen vessel
Of the hope to renovate, renew,
You trespassed on the alien waters flooding
Round our dull collectives and mute souls.
Max, you married Rema, and your children
Now are found in stranger corners trudging
Brilliant deserts as the prophets come.

‘If it’s unfelt it’s not worth buying’,
Rema says to Max, and art should be well-felt,
Felt up to rainbow prisms, down echidna spikes
Roughing the threatening ghost gums of the night.
Here’s to Max and Rema! May they live
Beyond these present realms of dire delight
To help us make the penguins angry with creative might.

Written 2003

Monday, November 13, 2006

Dispatches: The Disenchantment of 11 Spring Street


Sitting on the northeast corner of New York City’s Elizabeth and Spring Streets, the Candle Building was for many years a mystery to passersby.  The five-story brick building has always-closed arched entrances on the ground level.  The bricks are brown and the windows are decorated with a repetition of the arch motif, this time as small arch details above pediments that serve as the windows’ top edges.  Inside these facade arches are small equine seals.  At the top level, the windows themselves are arched, tall, and narrow, while at ground level the ceiling height must be at least fourteen feet.  The whole building is covered in sooty grime, and at the ground level with an almost archaeological layering of wheatpaste and paint.  In a neighborhood that features mostly rickety, turn-of-the-century tenement buildings with fire escapes and apartments, the Candle Building doesn’t really make sense – it clearly has some other purpose.  There are other tenement facades in the neighborhood (columns from an Italian villa, small neoclassical friezes) that have baroque touches, and there are outsized Beaux-Arts banks, but the Candle Building’s combination of humble brick, grandiose scale and giant entrances is inexplicable. 

I do not describe the building from memory – I’m looking at it right now.  I occupy the southwest corner of Elizabeth and Spring Streets, and the Candle Building is visible from all five of my windows.  When I first moved in, seven years ago, the building stayed completely inert until dark, when each window was illuminated by a small light and a dramatic pair of drawn curtains, and later, a string of little lights.  You never saw who entered or exited the place.  This clear demonstration that one person or group of people occupied the five-story structure was unbelievable and spooky.  It was a staged haunting made uncanny by its vast theatre.  At that time, ours was a humbler corner.  (A sleepy bodega, a Taoist temple, a laundromat and a Dominican barbershop have been replaced or joined by five restaurants, three bars, four boutiques, a hair salon, and a wine shop.)  So the dark speculations of the time tended towards the urban gothic, and, this having once been Little Italy, usually involved the mafia. 

One day that first year my landlord, a friendly man in his eighties who had lived his whole life on this block, was sitting in my apartment regaling me with stories of Golloo, a friend who had died in the fifties from drinking too many bottles of Coca-Cola too fast.  As Sonny’s small dog, Tiny, waited patiently, I asked Sonny about the building.  “Oh, the stables?”  With that one word, almost everything strange about the architecture of the building was resolved: the oversized arches, the ground floor’s height, the humble grandeur of the detailing.  Sonny told me that he remembered horses being kept inside.  With the next few sentences, he resolved for me the rest: the owner was a designer who lived alone and worked inside, and tended to the display.  There were other huge structures owned by single artists in the neighborhood, such as the photographer Jay Maisel’s giant bank on Spring and Bowery; they’re relics of a time when this neighborhood was a different kind of frontier.  It all made sense.

I was an initiate.  I could bandy this knowledge myself, letting people into the secret of 11 Spring Street if I chose, usually with a casual tone to denote my world-weariness and long familiarity with all New York secrets.  Almost as though it happened because I now knew, I began to see the owner occasionally, furtively exit the building’s side door.  But by this time, another development was occurring.  I’d begun to notice the copious graffiti on its walls.  Now there used to be a lot of good walls in Soho for street art, because there were more abandoned buildings or shells to paint on without worrying about someone blasting the wall clean again.  But the Candle Building, along with a building on Wooster between Canal and Grand, was a mecca.  A couple of years ago, a friend began documenting the walls of the Candle whenever she happened to be over, and I started identifying and following the various practitioners who appeared there.  I did this often with the help of the website Wooster Collective, whose owners are probably the most important collectors of this kind of art, the unofficial curators of this world.  The Candle Building constituted a kind of intergenerational commitment to the creative and the strange, the irregular and unofficial.

Then, in one of those neighborhood transition-marking moments, the building was put up for sale.  My first thought was: I’ll buy it.  I called everyone with money I knew, found out the square footage, what kind of Certificate of Occupancy there was, how much renovation costs might be.  I enlisted a friend, an architect who had worked for Rem Koolhaas, to help me.  For a while I said “C of O” like a professional.  I fantasized that we would rent the units for no profit to people who would appreciate what it meant to live in the Candle, who would be sickened if I even mentioned steam-cleaning the graff.  The place would be a bulwark, not against gentrification exactly, but against tastelessness.  Then the realtor finally told me the price: six million dollars.  The jig was up, and I wondered who would put up that kind of money, who could afford the payments while a couple of years of cleaning and renovation went on.  The answer was… Lachlan Murdoch, who’d been put in charge of the Post and wanted, sociopathically, to turn the whole place into his palatial urban residence.  Things were going from bad to worse; not only would I not rule the domain, but the son of Rupert Murdoch would.

Lachlan must have screwed up his nepotistic assignment, however, because after two years of owning the place, but thankfully leaving the exterior untouched, he left the Post and moved back to Australia.  By this time, street art had become a phenomenon, with major galleries promoting its new generation, and celebrated figures dropped by the Candle and other sites on Elizabeth frequently – there were Os Gemeos figures, Bast paste-ups, Rambo tags, etc.  It’s not an exaggeration to say that the building enjoyed worldwide fame, amongst a certain expanding subculture.  Things seemed to be in a kind of stasis.  Places in New York have a kind of half-life to them, in that the leaseholders and owners of properties and premises often last well into an oncoming stage of gentrification.  For a time, the property values a neighborhood commands remain hidden by stabilized tenants and owner-owned butcher shops.  Then, as leases expire, lessors pass on and owners cash out, the emergent identity of the neighborhood becomes apparent; developers and their more efficient business models snap up tawdry hotels and mysterious horse stables.  Which is, of course, what has now happened to the Candle.  Having been bought by a husband-and-wife development team, it is slated to be turned into three luxury triplexes and a floor-through apartment, with a new structure to be appended to the roof. 

Two weeks ago, I noticed, in broad daylight, a huge version of the London Police character being painted.  A couple of days later, I looked out my window to see Michael deFeo painting his happiness-inducing flower, giant-size, right next to it.  In plain sight?  Something was afoot.  I wandered down and learned that Wooster Collective had organized a sort of final exhibition at the Candle, with many major works to come, both on the walls and inside the building, which I got to look into for the first time ever.  New stuff is all over the place; it’s exciting.  All of this is being done with the approval of the new owners, one of whom apparently majored in art history, and will conclude with a party before the renovation and final scraping of the building’s exterior.  The scene on the corner has begun to resemble a circus of passersby snapping photos, artists painting and wheatpasting on ladders and scaffolds, and Marc and Sara of Wooster playing congenial ringmasters.

Take a look before December 16th, for sure.  And don’t be sad about the demise of the enigma that was the Candle: this isn’t that neighborhood anymore, and there’s no good reason for it to become a frozen museum to itself.  It is very thoughtful of the new developers to allow this reprieve.  But be warned: the whole thing has a slightly safe, invited feeling to it: it’s street art as conventional public art.  As Marc has pointed out to me, street depends on its illegality for some of its insurgent power.  That’s what makes it so philosophically interesting: it’s an intervention against the state’s and the advertisers’ right to control public space.  By that standard this final celebration of 11 Spring is not exactly street.  Even the giant scale the artists are able, without fear of arrest, to work on seems, paradoxically, to diminish their work by making it too obvious.  They don’t pop out at you, like street pieces usually do, a delightful irruption of artmaking where you don’t expect it.  But the enchanted secrets of cities will still be found elsewhere.  They’ll hide themselves again.

The rest of Dispatches.

Old Bev: Hair (Summer 2001)

Esther was the sister of a close friend of mine, and we were in a hair salon in a lavish resort in Fethiye at the end of several weeks in Turkey.  Our other friends were somewhere else, maybe on a boat or in the bar, and I was sitting in a cracked leather couch in a yellow room while Esther had her face helmeted with bangs and squarish layers. I was tired and eighteen and drinking a can of sour cherry juice and I was staring at a laminated picture of Kate Moss in a blue binder.  She had very short hair.

“I can give that to you.”  Suddenly Adem the hairdresser was crouched in front of me.  He brushed hair out of my eyes and touched my chin.  He had no hair himself and a vague jaw line; he was older than me by a lot. Later that night we were at the disco in the resort and a song by Tarkan came on and it was one of the few Turkish songs I knew.  Adem materialized (he had a talent for this) and grabbed me and led me through an extravagant sort of tango that required me only to sort of relax into him and move my feet fast enough not to be stepped on.  It was a spastic dance but had a loose logic that kept us from banging into any of the other four couples on the floor who were all engaged in a style of grinding I’d never seen before, grinding with a lot of footwork.  I remember holding on to Adem’s back through his slick pastel tee-shirt and my other hand clamped in his grasp.  It felt like The Scrambler, this crazy amusement park ride I loved, except that when I would exit that attraction the ticket taker wouldn’t gather my hair in his fist at the nape of my neck and ask me if he could please cut it.

A week after I returned to California (still with long hair) I started work at a small, family owned camera shop.  I ran the register, dusted the frames with some scraggly feathers, and kept the film processing envelopes organized.  When one of the owner’s daughters, an army vet who actually did teach me to tango, didn’t show up I was responsible for scanning and color correcting negatives and burning them to CD.  It was an okay job and when we had no customers my coworker Paul and I would play air hockey with crumpled paper and compressed air.  Paul studied photography at the community college and made long lists of qualities he desired in a wife.  When we weren’t playing our air hockey tournament or dusting, Paul would show hundreds of his photographs to Ned, our senior salesperson.  Ned hated Paul’s work.  I tried to be encouraging and pointed out pictures I liked, but I couldn’t do much to blunt the criticism.  At the end of the summer Paul presented me with his final list of desirable wifely traits.  Pious, Good with Money, and Long Hair were three that stuck out to me.

My hairdresser was named Douglas and his salon was meticulously decorated in a spare, popular “zen” kind of style, and Madonna’s Ray of Light album was a favorite soundtrack.  After the shampoo Douglas would do something I liked very much, tuck a towel over my wet hair and behind my ears and put some good smelling oil on my forehead.  He’d press it right above my eyebrows with his index finger and then drag the oil up very quickly in a short little line.  Then he’d leave the room very purposefully and I’d sneak looks at him through the glass door; he’d be in the hall eating a little sandwich or hardboiled egg.  The hair cut was always too short and too bouncy and once he put some red dye in without asking me.  We had a close relationship.  When he cut my bangs a few years later he didn’t mind when I grabbed his leg.  I was nervous.  At the time I stuck with a single length, right at the shoulders, parted straight down the middle.  I thought something might change if I switched.

At the end of the summer I got a call from Ethan, who I hadn’t heard from since the sixth grade.  I guess we had been sort of boyfriend/girlfriend in elementary school – he gave me a pen with three different colors of ink – but no declarations were ever made.  We were both skinny and I had some huge glasses and braces and he was covered in freckles.  His most distinctive feature was a shock of red hair, bright, beautiful orangey red that I had always wanted to touch.  I told him over the phone to take me to lunch and he showed up at my house that weekend in his father’s convertible.  I was in the kitchen, craning my head around so that I could see him exit the car and the first thing I noticed was that Ethan’s red hair was now on his face in a careful goatee.  I don’t remember much of the lunch or him driving me home, but I know that at one point he touched his jaw and found a small patch he’d missed with the razor, and then sat with his chin in his hand to cover the spot. 

Monday Musing: Cocktail Party Conversation Permit

It is a frequently observed phenomenon that the less educated and intelligent people are, the more they tend to have decisive and strong opinions on the most complex political, philosophical, economic, and other pressing issues. You know the kind of person I am talking about, the one who is eager to quickly diagnose and solve a world problem or two with a single profound proclamation at every cocktail party. Like the two urbane and seemingly well-educated and well-dressed slightly older gentlemen I once overheard at a dinner party in Karachi (and there are plenty here in America, or anywhere for that matter) saying with great conviction (and with extremely thoughtful expressions on their faces, and in ponderous cadences, as if they were straining under the burden of a massive feat of cognitive strength and skill):

1st Guy: “Pakistan’s only problem has always been that our leaders lack sincerity.”
2nd Guy: “No, no, no. Our only problem has always been that our leaders lack commitment.”

The first guy then actually carefully considered this pearl of wisdom from political philosopher and all-round theorist #2 and finally, having reevaluated his own sophisticated worldview in the light of this new gem, dumped it unceremoniously, humbly but gravely declaring defeat: “Yes… I see… you are right… it is a matter of commitment.” In the throes of the cringing frustration one feels when faced with this sort of cretinism, I have sometimes felt that people should have to be licensed to spew profundities at cocktail parties, otherwise they should only be allowed to speak about either the weather or quantum theory. And the license would be received after demonstrating the ability to think about really, really, simple problems by passing a test. The idea, of course, being that if you can’t think lucidly, logically, creatively and successfully about very simple problems where all the information required to solve them is present in their statement, and which have very clear and demonstrable solutions, what the &$@# makes you think you should be engaging hard and incredibly complicated and intricate issues?

Okay, okay, for the last nine days or so I was out of town and very busy and that is my excuse for not writing a substantive column today. (Perhaps some of you noticed that I wasn’t posting all of last week?) Instead, now that I have given you some motivation to try and think about simple problems, I present a challenge to you: solve some logical and mathematical puzzles that my friend Alex Freuman sent me. Alex teaches high school physics and math at La Guardia High School here in Manhattan. (It was the model for the high school in the movie Fame.) I had seen some of the puzzles before but not others, and it took me a while to solve some of those. The first person to email me (click “About Us” at the top left of this page for my email) a full list of correct solutions, wins the privilege of writing one of our Monday columns for November 20th. Okay, so it’s not a huge prize, but hey, if you’ve got something to say, here’s your chance. And, of course, you will have earned the cocktail party conversational permit as far as I am concerned.

Don’t look up the solutions, and please don’t post solutions in the comments. Try to do all of them yourself. Believe me, even if you have to think for some days about a problem before you get it, there is a huge satisfaction and mental reward in doing so yourself. And you will feel more confident in yourself too. I shall, of course, trust you not to cheat. Here they are:

  1. You are given two ropes and a lighter. This is the only equipment you can use. You are told that each of the two ropes has the following property: if you light one end of the rope, it will take exactly one hour to burn all the way to the other end. But it doesn’t have to burn at a uniform rate. In other words, half the rope may burn in the first five minutes, and then the other half would take 55 minutes. The rate at which the two ropes burn is not necessarily the same, so the second rope will also take an hour to burn from one end to the other, but may do it at some varying rate, which is not necessarily the same as the one for the first rope. Now you are asked to measure a period of 45 minutes. How will you do it?
  2. You have 50 quarters on the table in front of you. You are blindfolded and cannot discern whether a coin is heads up or tails up by feeling it. You are told that x coins are heads up, where 0 < x < 50. You are asked to separate the coins into two piles in such a way that the number of heads up coins in both piles is the same at the end. You may flip any coin over as many times as you like. How will you do it?
  3. A farmer is returning from town with a dog, a chicken and some corn. He arrives at a river that he must cross, but all that is available to him is a small raft large enough to hold him and one of his three possessions. He may not leave the dog alone with the chicken, for the dog will eat it. Furthermore, he may not leave the chicken alone with the corn, for the chicken will eat it. How can he bring everything across the river safely?
  4. You have four chains. Each chain has three links in it. Although it is difficult to cut the links, you wish to make a loop with all 12 links. What is the fewest number of cuts you must make to accomplish this task?
  5. Walking down the street one day, I met a woman strolling with her daughter. “What a lovely child,” I remarked. “In fact, I have two children,” she replied. What is the probability that both of her children are girls? Be warned: this question is not as trivial as it may look.
  6. Before you lie three closed boxes. They are labeled “Blue Jellybeans”, “Red Jellybeans” and “Blue & Red Jellybeans.” In fact, all the boxes are filled with jellybeans. One with just blue, one with just red and one with both blue and red. However, all the boxes are incorrectly labeled. You may reach into one box and pull out only one jellybean. Which box should you select from to correctly label the boxes?
  7. A glass of water with a single ice cube sits on a table. When the ice has completely melted, will the level of the water have increased, decreased or remain unchanged?
  8. You are given eight coins and told that one of them is counterfeit. The counterfeit one is slightly heavier than the other seven. Otherwise, the coins look identical. Using a simple balance scale, can you determine which coin is counterfeit using the scale only twice?
  9. There are two gallon containers. One is filled with water and the other is filled with wine. Three ounces of the wine are poured into the water container. Then, three ounces from the water container are poured into the wine. Now that each container has a gallon of liquid, which is greater: the amount of water in the wine container or the amount of wine in the water container?
  10. Late one evening, four hikers find themselves at a rope bridge spanning a wide river. The bridge is not very secure and can hold only two people at a time. Since it is quite dark, a flashlight is needed to cross the bridge, and only one hiker has brought his. One of the hikers can cross the bridge in one minute, another in two minutes, another in five minutes and the fourth in ten minutes. When two people cross, they can only walk as fast as the slower of the two hikers. How can they all cross the bridge in 17 minutes? No, they cannot throw the flashlight across the river.
  11. Other than the North Pole, where on this planet is it possible to walk one mile due south, one mile due east and one mile due north and end up exactly where you began?
  12. I was visiting a friend one evening and remembered that he had three daughters. I asked him how old they were. “The product of their ages is 72,” he answered. Quizzically, I asked, “Is there anything else you can tell me?” “Yes,” he replied, “the sum of their ages is equal to the number of my house.” I stepped outside to see what the house number was. Upon returning inside, I said to my host, “I’m sorry, but I still can’t figure out their ages.” He responded apologetically, “I’m sorry. I forgot to mention that my oldest daughter likes strawberry shortcake.” With this information, I was able to determine all of their ages. How old is each daughter? I assure you that there is enough information to solve the puzzle.
  13. The surface of a distant planet is covered with water except for one small island on the planet’s equator. On this island is an airport with a fleet of identical planes. One pilot has a mission to fly around the planet along its equator and return to the island. The problem is that each plane only carries enough fuel to fly halfway around the planet. Fortunately, each plane can be refueled by any other plane midair. Assuming that refuelings can happen instantaneously and all the planes fly at the same speed, what is the fewest number of planes needed for this mission?
  14. You find yourself in a room with three light switches. In a room upstairs stands a single lamp with a single light bulb on a table. One of the switches controls that lamp, whereas the other two switches do nothing at all. It is your task to determine which of the three switches controls the light upstairs. The catch: once you go upstairs to the room with the lamp, you may not return to the room with the switches. There is no way to see if the lamp is lit without entering the room upstairs. How do you do it?

In case no one gets all the answers right, the highest score wins. In the case of a tie, whoever gets me the next correct answer first wins. And keep in mind that by no means am I suggesting everyone should solve all the problems. Some of them are hard, and if you can’t figure them out, don’t worry about it. But keep trying! Thanks for sending the problems, Alex, and sorry, but you are disqualified.

Ready, set, go!

UPDATE: We have a winner!

UPDATE 2: Answers here.
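Since the answers are now posted, here is a quick programmatic check of the standard strategy for puzzle 2 (a Python sketch; the function name is mine, not from the original post): blindly set aside any x coins as a second pile and flip every coin in it. If the pile you grabbed held k heads, it holds x - k heads after flipping, which is exactly how many heads remain in the other pile.

```python
import random

def split_and_flip(coins, x):
    """Strategy for puzzle 2: move any x coins into a second pile
    and flip every coin in that pile."""
    pile_a = coins[x:]                    # the remaining 50 - x coins, untouched
    pile_b = [not c for c in coins[:x]]   # x coins, each flipped over
    return pile_a, pile_b

# Check the strategy over many random blindfolded arrangements.
for _ in range(1000):
    x = random.randint(1, 49)                  # 0 < x < 50
    coins = [True] * x + [False] * (50 - x)    # True = heads up
    random.shuffle(coins)
    # Blindfolded, we just grab the first x coins off the table.
    pile_a, pile_b = split_and_flip(coins, x)
    assert sum(pile_a) == sum(pile_b)          # equal heads in both piles

print("strategy verified")
```

The flipping is what makes the trick work: you never need to know which coins are heads, only how many coins to grab.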

My other Monday Musings can be seen here.

Monday, November 6, 2006

A Case of the Mondays: It’s Not Oppression Alone

In previous installments of this column, I’ve written about racial oppression, and about how European racism against Muslim minorities is the primary fuel of modern Islamist terrorism. But now I feel I must explain that violence and extremism in general do not follow from oppression alone. Oppression helps nurture both, but what is important is not so much the reality of oppression as the perception of oppression, and the expectation that violent extremism can usher in a non-oppressive situation. This explains why many of the symptoms of Islamist extremism in Europe also exist among Christian conservatives in the United States, even though they are far from being downtrodden.

First, the narrative of oppression is central to every radical ideology. Almost invariably, every radical of any kind believes he is being suppressed by some abstract enemy: the Jews, the liberals, the West, secularism, science, communism, capitalism, white people. This belief has nothing to do with reality, and even when the group the radical claims to represent is oppressed, the radical will seldom join in more mainstream action to combat oppression, or recognize when things get better. Black nationalists decried Martin Luther King’s marches as displays of obsequiousness; Christian fundamentalists gloss over the ACLU’s protection of civil liberties in the face of sometimes hostile school superintendents; communists refused to cooperate with social democrats even when Hitler was throwing both into concentration camps alike.

So the question of what causes violence is not the question of what causes radical ideologies to appear, but of what causes large numbers of people to accept them. Real oppression certainly helps, since the level of violence a country’s minorities engage in tends to track the level of inequality between them and the majority ethnicity. Put another way, the two countries where there is relatively little socioeconomic discrimination against Muslims by Western standards, the United States and Canada, are the two countries where Muslims are least likely to enlist in Jihadi organizations.

But a theory of what causes violence has to be more complex than that. Atheists and homosexuals, two minorities marginalized in most countries with a strong religiously conservative streak, have never engaged in terrorism, unless one counts communists who also happened to be atheists. African-American riots are an exceedingly rare phenomenon. In forty years, radical feminists have produced exactly one terrorist, the mentally unstable Valerie Solanas. Before partition became obvious in India, anti-colonialist activism was non-violent. And in contrast, the KKK was never oppressed.

In all cases where terrorism occurred, there was a strong perception of oppression, even if it was really practiced by a dominant group that considered equality oppressive. Klansmen seriously considered black suffrage a bad thing for white people. Various factors then pushed many Southern whites toward radicalism, such as being told by Northerners first not to enslave black people and then to desegregate. In a similar vein, the Nazis could scapegoat Jews and communists as responsible for the misery of Germany, and thus convince large numbers of Germans that these two marginalized groups were actually oppressing the German people.

In contrast, any form of oppression that does not have an element of socioeconomic inequality or obvious legal marginalization will be glossed over. In the United States, secularist activists usually understand how the government routinely violates separation of church and state, but most nonreligious people can easily live their lives without seeing these violations as a yoke. Even when inequality is glaringly obvious, as in the case of gays and lesbians, without systematic impoverishment people have too much to lose from engaging in violence.

Groups that are not really being oppressed find their most zealous supporters among the lower classes. I’ve already noted that the lower classes are likelier to engage in crude racism against lower-ranked groups than the upper classes; this also applies to terrorism, since not only do they have relatively little to lose from committing terrorist acts, but also they already tend to view their situation as miserable and are susceptible to scapegoating. Upper-class whites in the United States don’t need to vent their anger by committing hate crimes against black people, and upper-class American Christians are comfortable enough with their material situation that they are in no rush to embrace Dominionism. Dominionist leaders are upper-class, but they fall under the rubric of radicals, so the important question is not about them but about their followers.

So at a minimum, the idea that marginalization causes violence and terrorism should be refined to “the perception of marginalization, mediated by socioeconomic inequality, causes violence.” But even that is not enough, because it can’t explain why there has been relatively little black terrorism in the United States, and why Islamic terrorism only flourished in Europe in the aftermath of 9/11.

In my post about Islamism’s watershed moment, I noted that European Jihadism arose after 9/11 because of Bin Laden’s inspiration. The same can be said in the other direction about marginalized groups that elected to resist oppression with civil disobedience. Just as Bin Laden became a role model for disgruntled Muslims, who then started to emulate his terrorist tactics, so did Martin Luther King inspire African-Americans and Gandhi inspire Indians to be non-violent. Neither of the latter two inspirations worked perfectly, but their presence correlates with far below average levels of violence on the part of these two groups.

Finally, the last complication to this model is that the perception of change can easily color the perception of oppression. In communist Eastern Europe, the people didn’t revolt at the height of poverty and repression; they revolted when things seemed to be slowly getting better, but then stagnated or improved too slowly. Without the inspiration of a leader who can convince people to undertake direct activism, regardless of whether it’s violent or not, people who are steadily oppressed accept their oppression as a fact of life. They start trying to change things only when they feel that good things are going away: that their privilege is evaporating, in the case of groups that are not really oppressed, or that equality is proceeding too slowly and politicians’ support for it is duplicitous, in the case of groups that are truly oppressed.

This explains why extremism, both violent and nonviolent, arises, even when the group that practices it is far from oppressed. It’s a more accurate rendition of the thesis that religious fundamentalism is merely a reaction to encroaching secularism; in fact it’s not a reaction to encroaching secularism or to the economic failures of modern capitalism, but a consequence of scapegoating certain classes of people. Christian fundamentalism in the United States, Muslim fundamentalism in the Middle East, and Hindu fundamentalism in India arise not from the failures of secularism, but from charismatic leaders who cause people to focus on hated outsiders.

On the other hand, the formulation that oppression causes extremism is a fairly good approximation. From a historical perspective, the role of perception is critical. From a policy one, the government can change none of the factors influencing violence, except the actual level of oppression, and, by proxy, the perception that things are improving. By and large, we can take oppression combined with the right inspiration to be the main cause of violence, and then say that some perception-related factors can cause oppressed groups not to commit terrorism and non-oppressed ones to engage in violence.