Ocracoke Post: On the AFI Top 100 Films of All Time Ever

Were you paying attention when CBS broadcast the American Film Institute’s Tenth Annual “100 Years, 100 Movies,” a list of the Top 100 American Movies Ever Made? If you missed it or the ensuing critical flap, the two lists are side by side here at the Wikipedia. Part of the idea was to compare how this year’s list would stack up against the AFI list from ten years ago. You wouldn’t expect Citizen Kane to vanish suddenly from the Top 100, but there have been some pretty good films made in the last ten years, from The Big Lebowski (denied!) to the Lord of the Rings series (#50). Don’t worry, I’m not writing this to complain about the AFI choices, while clutching my goatee and watching my battered copy of Last Year at Marienbad frame by frame again. Even if AFI did put three Spielberg films ahead of Double Indemnity last time, the world probably won’t end. (How Toy Story got on the new list is a mystery for the ages, but then again how can you complain about the addition of Sullivan’s Travels?) I just want to add a few titles from the last ten years that didn’t appear on the show. These are great movies, most by great living directors; some get the respect they deserve and some don’t. I’ve listed nine films here so that you can provide 3QD with the tenth film from the last ten years overlooked by the AFI.

Deconstructing Harry (1997) or Celebrity (1998)
So you don’t like Woody Allen’s late style. I feel sorry for you. Stop confusing the man with the artist. Go back to Allen and appreciate him while he’s still around. In Celebrity, you can watch Kenneth Branagh try to be Woody Allen, and the ever-astounding Charlize Theron puts in an extraordinary performance as a pretty messed up model.

The Game (1997) or, yes, I’m sorry, but Panic Room (2002)
David Fincher’s take on the thriller genre in The Game is a paranoiac’s dream. It’s difficult to describe why this movie is so much fun, but it is. Sean Penn buys his brother, Michael Douglas, a very special “experience” for his birthday. In most people I know, Panic Room inspires nothing like the emotion promised by the title, but they’re just wrong. Great performances from the already classic Jodie Foster and the soon-to-be classic Forest Whitaker.

Lost Highway (1997)
David Lynch is so scary. I don’t even ever want to watch this movie again. Considered a dud by many, Lost Highway is actually completely terrifying and weird. It has perhaps the scariest lines of dialog in movie history, something to the effect of: I’ve been in your house. In fact, I’m there right now. Ugh. My skin is crawling just thinking about this movie; let’s move on.

The Big Lebowski (1998) or The Man Who Wasn’t There (2001)
At least the AFI put Fargo on their last list, but dropped it this time around. To quote Roger Ebert: “What? No Fargo?” To have Jaws, E.T., Raiders, Private Ryan, Schindler, and so forth all securely fastened to the current list is fine, I guess, but if those choices meant excluding the Coens altogether, that is simply “laughable, man, bush league psycho-out stuff,” as Jesus Quintana once put it. Of course, I’m biased: a book I wrote with a friend on Lebowski has just been published by the British Film Institute. (The Barbican in London is showing a wonderful double bill of The Big Sleep and The Big Lebowski, with our Introduction, on July 15.) You know what, though? At the risk of blasphemy: The Man Who Wasn’t There might be even better than Lebowski. It’s an extraordinary period-piece crime-genre bender and a philosophical investigation of the basic noir tenets laid down by Double Indemnity. The Coens even filmed the execution scene that Billy Wilder notoriously shot at great expense but left out of the final cut.

Rushmore (1998)
This movie has probably influenced more young writers than most living novelists have, and it was largely responsible for the much-deserved Bill Murray renaissance. Don’t mess with the love life of a high school kid who knows how to train bees; I think that’s the basic message of the film.

Made (2001)
I know few people who have even heard of this hilarious mock-mob movie from Jon Favreau (who wrote Swingers), starring Favreau and Vince Vaughn, with great appearances by Peter Falk and the artist formerly known as P. Diddy. Basically it’s the story of two wanna-be gangsters from L.A. who go to New York and get themselves into terrible trouble with real mobsters. If you like to laugh, and especially if you love the mob genre, this movie is a lesser-known gem.

Donnie Darko (2001)
Probably you are already a rabid fan of Donnie Darko or else you will never care. Supernatural giant bunny rabbits, circular time, Patrick Swayze as a pervert, and terrific scenes from life in an American high school circa 1980s.

Inside Man (2006)
Speaking of late styles, I love the movies Spike Lee is making these days, from subdued and classy thrillers like Clockers and Inside Man to the documentary experience of When the Levees Broke. Inside Man is just a damn good suspense film with fine performances from Denzel Washington and Jodie Foster, about a bank heist in which the money in the vault isn’t actually the main prize.

A Scanner Darkly (2006)
I hear no end of complaints about this fine movie. I can’t imagine why. Richard Linklater gets fabulous performances from the out-of-favor crowd, Keanu Reeves, Robert Downey, Jr., and Winona Ryder, as well as another terrific, strange turn from the hardy perennial Woody Harrelson. The rotoscoping technique, developed for Waking Life (and also used in those Charles Schwab investment commercials), is a digitized painterly version of reality that is half-human and half-illustration or something like that. But Linklater’s greatest accomplishment is getting the dialog of Philip K. Dick, a great writer who had no idea how humans talk to each other, to sound hyper-realistic.



Below the Fold: The New Plessy versus Ferguson and White Privilege

Michael Blim

There is something odious about privilege. In this case, white privilege.

On June 28, the Supreme Court ruled in Parents Involved in Community Schools versus Seattle School District No. 1 that using race as the sole criterion for assigning children to one elementary school or another violated the equal protection clause of the 14th Amendment to the Constitution. Chief Justice Roberts, writing for the plurality of the Court, set down its ruling in stark terms:

“Accepting racial balancing as a compelling state interest would justify the imposition of racial proportionality throughout American society, contrary to our repeated recognition that ‘at the heart of the Constitution’s guarantee of equal protection lies the simple command that the government must treat citizens as individuals, not as simply components of a racial, religious, sexual, or national class.’”

The racial classification of students in creating diverse public schools, Roberts argued, violates the landmark Brown versus Board of Education of Topeka, Kansas (1954) decision requiring school districts, as the Court put it, “to achieve a system of determining admission to public schools on a nonracial basis.” (emphasis added by Roberts) Fifty-odd years of race-conscious remedies to provide African-Americans with equal educational opportunity, other than in cases of legislated de jure school segregation, have infringed upon the rights of each child, black or white, to equal treatment under the law.

To mark the destruction of precedent, announce the end of an era spent searching for remedies to the historical disadvantages heaped on African-Americans during slavery and after, and perhaps to give himself a memorable, quotable line for the seven o’clock news, Roberts concluded his opinion for the Court with this exhortation: “The way to stop discrimination on the basis of race is to stop discrimination on the basis of race.”

In other words, just say no.

Justice John Paul Stevens, 87 years old, a member of the Court for 32 years, and its elder statesman, spotted the sleight of hand right away. How could Brown, a decision to remedy state discrimination against African-Americans, now be used against them in their quest for equal educational opportunity?

Justice Stevens put it this way:
“There is a cruel irony in THE CHIEF JUSTICE’s reliance on our decision in Brown versus the Board of Education. The first sentence in the concluding paragraph of his opinion states: ‘Before Brown, schoolchildren were told where they could and could not go to school based on the color of their skin.’ This sentence reminds me of Anatole France’s observation: ‘The majestic equality of the law forbids rich and poor alike to sleep under bridges, to beg in the streets, and to steal their bread.’ THE CHIEF JUSTICE fails to note that it was only black schoolchildren who were so ordered; indeed the history books do not tell stories of white children struggling to attend black schools. In this and other ways THE CHIEF JUSTICE rewrites the history of one of this Court’s most important decisions.”

Once more, as in Plessy versus Ferguson (1896), the Supreme Court has used the equal protection clause of the 14th Amendment against the very African-Americans it was written and designed to protect.

Privileged people never see the connection between their power and the powerlessness of others. One could say charitably that it is too threatening to their virtue. One could also say less charitably that many just don’t care. It is a liability to care or to assume responsibility. It tarnishes their self-justification. It picks at the myths that they carefully pin like manifestos on the murals of public life.

So, as Justice Stevens noted, Roberts et al. traduced Brown to make new law. In fact, this decision is Plessy versus Ferguson in a rather shabby and ignoble Brown versus Board disguise. The only problem they might have with Plessy is that the Court at least recognized the intent of the 14th Amendment as a pledge to blacks as a disadvantaged class and as a guarantee of “absolute equality of the two races before the law,” while ruling that it was never intended to abolish the social distinctions between blacks and whites. If states wanted to create race-segregated public transportation, schools, and other public places, they could do so. The black plaintiffs, the Court argued, were unwarranted in believing that segregation “stamps the colored race with a badge of inferiority.” Moreover, the Plessy Court believed, the plaintiffs assumed “that social prejudices may be overcome by legislation, and that equal rights cannot be secured to the negro except by an enforced commingling of the races…. Legislation is powerless to eradicate racial instincts or to abolish distinctions based upon physical differences…”

The Plessy Court concluded: “If the civil and political rights of both races be equal one cannot be inferior to the other civilly or politically. If one race be inferior to the other socially, the Constitution of the United States cannot put them upon the same plane…”

Justices Roberts, Alito, Scalia, and Thomas do not wish to recognize African-Americans as a class, and do not interpret the 14th Amendment as an explicit guarantee of African-American rights as a class in light of the historic deprival of their rights in slavery. Instead they construe the 14th Amendment solely as a guarantee of each person’s right to equal protection under the law. Thus, in the case before them, if a school district prefers a black child for admission to a public school because it seeks to integrate schools racially by administrative action, or in other instances seeks to re-assign black and white students to prevent their social isolation in all-black schools, or seeks to create diverse student bodies under the belief that all students would benefit from the experience, the Court finds these kinds of actions impermissible. They violate the right of a white child to have access to the educational resources he or she seeks, access denied solely because he or she is white.

It is a cleaner kill of the 14th Amendment’s guarantee of equal protection for African-Americans than was Plessy. African-Americans are no longer an historically disadvantaged group who lived almost 350 years as slaves and in renewed bondage under Jim Crow laws. After 388 years in America, only a minority of African-Americans since the mid-1960s have begun to live lives blessed by some measure of equal opportunity. For this Court, they have become individuals who happen to be black, one of a potentially infinite set of characteristics that defines each of them in distinctive ways. As such, their rights are no more important than those of any other persons.

Justice Thomas argues that the Constitution must be color-blind. “We are not social engineers… the Court does not sit to create an ‘inclusive society’ or to solve the problems of ‘troubled inner city schooling.’” Like the majority in Plessy, he rejects the notion that African-Americans acquire a badge of inferiority when isolated from whites. In words that directly recall Plessy, he writes that the Court cannot permit “measures to keep the races together and proscribe measures to keep the races apart.” He concludes: “the government may not make distinctions on the basis of race.”

Thomas takes Justice Harlan of the Plessy Court as his patron for a color-blind Constitution, quoting from Harlan’s Plessy dissent a rather odd declaration that he evidently finds supportive of his position. Justice Harlan in his peroration for a color-blind society says:

“The white race deems itself to be the dominant race in this country. And so it is, in prestige, in achievements, in education, in wealth and in power. So, I doubt not, it will continue to be for all time, if it remains true to its great heritage and holds fast to the principles of constitutional liberty. But in view of the constitution, in the eye of the law, there is in this country no superior, dominant, ruling class of citizens. There is no caste here. Our Constitution is color-blind, and neither knows nor tolerates classes among citizens.” (words in italics omitted by Thomas)

This is an extraordinary comment for Thomas to enlist as an endorsement. Here is Harlan, an apologist for white supremacy, disingenuously denying white dominance. Just as he refuses to recognize organized white rule, he refuses to acknowledge that African-Americans in 1896 were a caste. Color-blindness is used here by Harlan not simply as a dashing defense of individual equality before the law. In context, it is an explicit rejection of the special status of African-Americans in the law, which even the Plessy majority accepted.

Justice Thomas and his colleagues follow Harlan’s reasoning precisely. Harlan’s white supremacist views are ignored, doubtless as a concession to the ignorance of his time.

Can Thomas and the other three justices be adjudged any less ignorant of their times? Can they be unaware of the desperate status of African-Americans today? Are they aware that as late as 1998, African-American family income was 49% of white income, and their net worth was 18% that of whites? Do they believe that 40 years of limited progress is enough compensation for 388 years of slavery and racism?

White privilege doesn’t allow these facts to come into evidence. After all, isn’t each African-American, and each American family for that matter, happy and healthy, or unhappy and unhealthy in its own way? To be so colorblind is to be so privileged that even facts are no enemy of your theory. You can actually deprive African-Americans of their rights again by ruling out of order any showing of their abundant economic and social inequality, and declare them theoretically equal in the eyes of the law.

This is the rhetorical trick of this insidious ruling.

Moreover, better to do it once and for all with a smashing opinion such as this one. Otherwise, as Justice Roberts warned (quoted above), people will start demanding racial proportionality throughout American life. That would put a bit of a crimp in the collar of white privilege.

In the case decided last Thursday, precedents didn’t matter. They stood Brown on its head. They converted the equal protection clause into another relatively harmless libertarian individual guarantee.

Characteristic of the far right wing in this period, they fall in with Margaret Thatcher: There is no society; there are only individuals. So use the law to strip out social missions from our institutions. Make the courts and the state simply night watchmen, there to protect property and haul off malefactors. In a game where whites start out more equal than others, if you are white, you have to like your odds.

And there is no telling what more this Court can do to pave your way.

3QD Interviews Craig Mello, Medicine Nobel Laureate

Harvey David Preisler died of cancer six years ago. He was a well-known scientist and cancer researcher himself. He was also my sister Azra's husband, and she wrote this about him here at 3QD:

Harvey grew up in Brooklyn and obtained his medical degree from the University of Rochester. He trained in medicine at New York Hospital, Cornell Medical Center, and in medical oncology at the National Cancer Institute. At the time of his death, he was the Director of the Cancer Institute at Rush University in Chicago and the Principal Investigator of a ten-million-dollar grant from the National Cancer Institute (NCI) to study and treat acute myeloid leukemias (AML), in addition to several other large grants which funded his research laboratory, with approximately 25 scientists entirely devoted to basic and molecular research. He published extensively, including more than 350 full-length papers in peer-reviewed journals, 50 books and/or book chapters, and approximately 400 abstracts.

A year after his death, my sister started an annual lecture in Harvey's memory, which is usually delivered by a distinguished scientist or other intellectual. The first Harvey Preisler Memorial Lecture was given by Dr. Robert Gallo, co-discoverer of HIV as the infectious agent responsible for AIDS. This year, the lecture featured the most recent winner of the Nobel Prize in Medicine, Dr. Craig Mello, co-discoverer of RNAi.

Dr. Mello grew up in the Virginia suburbs of Washington D.C. and graduated from Fairfax High School there. He went to Brown for college and later got a Ph.D. from Harvard University. He then worked as a postdoctoral fellow with Dr. James Priess at the Fred Hutchinson Cancer Research Center in Seattle, Washington. Dr. Mello now runs his own lab at the University of Massachusetts at Worcester. This is from Wikipedia:

In 2006, Mello and Fire received the Nobel Prize for work that began in 1998, when Mello and Fire along with their colleagues (SiQun Xu, Mary Montgomery, Stephen Kostas, Sam Driver) published a paper in the journal Nature detailing how tiny snippets of RNA fool the cell into destroying the gene's messenger RNA (mRNA) before it can produce a protein – effectively shutting specific genes down.

In the annual Howard Hughes Medical Institute Scientific Meeting held on November 13, 2006 in Ashburn, Virginia, Dr. Mello recounted the phone call he received announcing that he had won the prize. He recalls that it was shortly after 4:30 am; he had just finished checking on his daughter and returned to his bedroom. The phone rang (or rather, the green light was blinking) and his wife told him not to answer, as it was a crank call. When he questioned her, she revealed that it had rung while he was out of the room and someone was playing a bad joke on them by saying that he had won the Nobel prize. When he told her that they were actually announcing the Nobel prize winners on this very day, he said “her jaw dropped.” He answered the phone, and the voice on the other end told him to get dressed, and that in half an hour his life was going to change.

The Nobel citation, issued by Sweden's Karolinska Institute, said: “This year's Nobel Laureates have discovered a fundamental mechanism for controlling the flow of genetic information.”

[Photo shows Dr. Mello with Azra and Harvey's daughter, Sheherzad Preisler.]

Just before his lecture, I had a chance to sit down in his office with Dr. Mello and speak to him about his work. On behalf of 3QD and our readers, I would like to thank Dr. Mello for making the time to explain his discovery in some detail. Asad Raza videographed our conversation:

Baseball, Apple Pie, and Bathtub Gin

by Beth Ann Bovino

A sailing trip touring the Croatian islands in the Adriatic Sea began with a gift from family. The skipper, a Slovenian man, brought out a 2-liter soda bottle to celebrate our sail, saying it was made by his cousin. From the label and the color of the drink, it looked like we were being served some Canada Dry. It was wine, and not bad at that. What began the journey lasted through the trip. We finished the wine and turned to much stronger stuff. (I’m not sure which cousin made this one.) It was a bottle of clear liquid with some kind of twig in it, probably to try and cover the taste. At each harbor we reached, and we traveled to many, we would take another swig.

The last night culminated with the purchase of some local wine. On the street, a guy was selling glasses of white and red, at 2 kunas each. We asked for a bottle. He gave our group samples, and asked for about 35 kunas or 7 euros, no negotiation. The ‘wine’ tasted like gasoline with a hint of apple. We walked off with our purchase and celebrated the find.

While we savored the wretched drink, the question became: what are we drinking? The answer: moonshine, of course.

What is moonshine? What makes it legal here, if, indeed, it is? How is it made and what can it do besides taste really bad and get you drunk?

Moonshine, also called white lightning or bathtub gin, is homemade distilled alcohol, usually whiskey or rum. Moonshine is a worldwide phenomenon and is made in secret, to avoid high taxes or outright bans on the stuff. In most countries, it’s illegal.

In France, moonshining was tolerated up to the late 1950s, since having an ancestor who fought in Napoleon’s armies gave you the right to distill. The right can no longer be passed to descendants.

In the Republic of Macedonia, moonshine is legal, and remains the liquor of choice, at least according to Wikipedia. In Russia and Poland it is illegal to manufacture moonshine, but the law against it is rarely enforced. A Polish woman I met told me about helping her granddad siphon off the flow at the age of 10, and recalled how her parents always had a few bottles around to hand out as gifts or favors. She said back then, under Soviet rule, food rations were used to make the stuff; it was usually overlooked by the state. Home distilling is legal in Slovenia, a rare exception.

In the United States, it’s pretty clear-cut. It is illegal to make, sell, distribute, or be in possession of moonshine. However, like baseball and apple pie, most agree that it will always be around. It is tied to U.S. history in many ways, from the Whiskey Rebellion in 1794 to the Prohibition-era distillers and, now, the backwoods stills of Appalachia.

Shortly after the Revolution, the United States was struggling to pay the expenses of the long war. The solution was to tax whiskey. The American people, who had just gone to war to fight oppressive British taxes, were angry. The tax on whiskey incited the Whiskey Rebellion among frontier farmers in 1794. The rebellion was crushed, so many farmers simply built their own stills and made their own whiskey, ignoring the federal tax.

Later on, states banned alcohol sales and consumption, which encouraged moonshining. In 1920, nationwide Prohibition went into effect, a boon to moonshiners. Suddenly, no legal alcohol was available, and the demand for moonshine shot through the roof. Easily able to increase their profits without losing business, moonshiners switched to cheaper, sugar-based or watered-down moonshine. Organized crime blossomed as speakeasies opened in every city.

When Prohibition was repealed in 1933, the market for moonshine grew thin. Later, cheaper, easily available, legal alcohol cut into business. Although moonshine continued to be a problem for federal authorities into the 1960s and ’70s, today, very few illegal alcohol cases are heard in the courts. In 1970, the Bureau of Alcohol, Tobacco and Firearms seized 5,228 stills, but from 1990 to 1995, only two stills were seized.

The law also chased down bootleggers, the smugglers who transported and sold it. The name came from the tall riding boots where they would hide their product. Later, bootleggers raced cars packed with moonshine at night to avoid the police. They learned to drastically increase the horsepower of their vehicles to outrun the authorities. This created a culture of car lovers in the southern United States that eventually grew into the popular NASCAR racing series. The winner of the first ever NASCAR race, Glenn Dunnaway, had used the same car to make a bootleg run just a week earlier.

What about homebrewed beer and amateur winemaking? Since these activities are different from distilling alcohol, they were made legal in the 1970s. However, they can only be done in small quantities. So if you’re supplying half the bars in town with your “homebrew,” you might run into a few problems with the Feds. Home distilling, however, is definitely illegal in any amount. Since it’s too easy to make a mistake and create a harmful product, permits and licenses are required to ensure safety. That, and the Feds want their tax money.

It’s All in the Mash


Corn is commonly used, though alcohol can actually be distilled from almost any kind of grain (see http://www.unm.edu/~skolman/moonshine/history1.html). Moonshiners during Prohibition started using white sugar instead of corn meal to increase their profits, and the earliest makers supposedly used rye or barley.

The recipe for moonshine is simple:

corn meal

sugar

yeast

water

Sometimes, other ingredients are included to add flavor or ‘kick’. Once you make the mixture, called mash, you heat it in a still. The “Alaskan Bootleggers Bible” (Happy Mountain Publications, 2000) shows a number of stills, including the ‘two-dollar’ still, which uses a crock pot and a milk bottle.

“Drink Up, Before It Gets Dark”

Besides the several years it could land you in jail (5 years for moonshining, 15 for money laundering), what makes moonshine different from the whisky you find on the shelf at a liquor store? Aside from the obvious differences between something made in a sanitized production facility and something made at night in the woods, the primary difference is aging. When whisky comes out of the still, it looks like water. Moonshiners bottle it and sell it just like that (the moonshiners’ code: “It’s for selling, not drinking”). Commercial alcohols are aged for years in charred oak barrels, which gives them an amber or golden color. It also mellows the harsh taste. There’s no such mellowing with moonshine, which is why it has such “kick.”

There are a few other reasons why drinking moonshine can be risky. Since the whole point of making moonshine is to avoid the law, no FDA inspectors will be stopping by the backwoods still at night to check whether the moonshiners have washed their hands, and no one will be able to ensure that all the ingredients are safe. It is not uncommon for insects or small animals to fall into the mash while it’s fermenting.

While a few furry creatures added to the mix wouldn’t likely kill anyone, you might have heard stories about people drinking moonshine and going blind — or even dying. These stories are true. During Prohibition, thousands of people died from drinking bad moonshine. I finished the sailing trip with both eyes intact. Though, one toast was “Drink up, before it gets dark!”

There isn’t anything inherently dangerous about moonshine when it’s made properly. It is very strong alcohol with a very hard taste, or “kick,” because it hasn’t been aged. However, some distillers realized that part of the appeal of moonshine was that “kick” and experimented with ingredients to add more of it, many of them poisonous, including manure, embalming fluid, bleach, rubbing alcohol and even paint thinner. Occasionally moonshine was deliberately mixed with industrial products containing denatured alcohol, methanol and other toxic substances. The results are poisonous, with methanol easily capable of causing blindness and death.

Besides poisonous ingredients, other manufacturing mistakes can poison moonshine. For example, only one pass through the still may not be enough to remove all the impurities from the alcohol and create a safe batch. If the still is too hot, more than alcohol can boil off and ultimately condense, meaning more than alcohol makes it into the finished product. Either can result in a poisonous drink. While it seems exciting to try (I was fascinated by the booze, and I own a crockpot), chances are you’d end up with pure poison. Moonshiners often die young, if they don’t go blind first or get caught by the Feds. Better to just walk down to your corner store for that bottle of Jack Daniels and soda.

Making Sense of Good Coffee

by Daniel Humphries

On May 29, at around 6:30 pm Eastern time, the program that runs the Best of Panama auction website stopped functioning unexpectedly. The bid price for this year’s top Panamanian coffee was stuck at $99.99 per pound. The program had not been designed to progress beyond that mark. Fifty dollars a pound was by far the most that had ever been paid for green coffee — and that price had been achieved or approached just a handful of times. Hacienda la Esmeralda, the prize-winning farm, had been producing astounding coffee for years, using meticulous and innovative growing, harvesting and processing techniques. Their famed gesha varietal, an heirloom plant from Ethiopia, created mind-blowing coffee, incredibly smooth and complex, with the sweetness and citrus of an orange right off the tree. I tried the Esmeralda crop in 2005 and again in 2006. I remember remarking in ’05 that not only was it the best coffee I had ever tasted, it might have been the best anything I had ever tasted. Back in May, when the auction finally got up and running again after a three-hour delay, the price went all the way to $130 a pound.

Does that sound absurd?

Anyone who tells you they fully understand the coffee trade is lying. Coffee, one of the most ubiquitous products of the modern world, comes from a thousand different places, and ends up in ten thousand more. People drink it as Turkish coffee, cowboy coffee, or café crème. There are as many different ways to drink coffee as there are distinct cultures on Earth. This is well known.

What surprises many people is that there is also an incredible variety of different flavors and aromas possible in the bean itself, not just the mode of preparation. The blackened, over-roasted stuff, crying out for cream and sugar, that most people in the West have access to doesn’t suggest that coffee is a delicate agricultural product, sensitive to time and place. But it is, and resoundingly so.

I grew up in a rainy, northern town. The muscle tissue of every good citizen was saturated with coffee, usually from a can, prepared using a plastic-body “Mr. Coffee” (or similar) electric drip machine. When the Millstone and Seattle’s Best brand coffee bins, with their whole beans bearing exotic names and their grind-on-the-spot machines, began appearing in the local grocery stores, it seemed like a glorious new age of consumer choice. It was a nagging question why some of the coffee had European designations (“French” roast, “Vienna” roast), some Indonesian (“Java,” “Sumatra”), and others still more inscrutable (“Midnight Blend,” anyone?). What were these designations meant to tell us? Where the coffee was grown, or roasted? Or perhaps something else entirely? One shoved these questions to the back of one’s mind, though.

My family’s choice was “French” roast. We thought it very distinctive at the time, our own little clan-defining consumer choice. Actually, dark or “French” roasts are exceedingly popular in the United States. There is a reason for this. And that one little fact speaks surprising volumes about the state of the global coffee trade.

In dollar terms, coffee is the second most-actively traded commodity in the world, after petroleum. This tells us it’s hugely popular, obviously. But it also tells us something else: coffee is a commodity. That is, it is traded —bought and sold on the international market— just as if it were gold, or crude oil. On Wall Street, no distinction is made between coffee grown in Honduras and coffee grown in India.

A staggering amount of coffee (and coffee abstracted in the form of capital) is changing hands, but much of it is controlled by just four companies (the “Big Four”: Nestle, Kraft, Sara Lee, and Procter & Gamble). Traditional, mountainside farmers in Kenya or Guatemala are forced to compete with huge conglomerate plantations in Brazil. In the 1990s, the global free trade regime convinced the government of Vietnam to replace enormous stretches of rice paddies with large-scale coffee plantations. Cheap, low-quality coffee flooded the market, driving already dismal prices into a tailspin and starving small farmers into switching crops (often to coca or khat).

Unfortunately, the process of Fair Trade certification is not the answer. The Fair Trade system quite admirably seeks to raise the prices that farmers get. It is certainly a step above the commodities market, and I do not wish to denigrate the work done by TransFair USA. But ironically it ultimately traps farmers in the same system: coffee is treated as an undifferentiated commodity to be exported, like aluminum or diamonds, only with a slightly fairer price. Organic certification is also problematic as an indicator of an ethical product, though again I respect the work done by organic farmers and certifiers. Most people now understand that “organic” does not necessarily mean small-scale or higher-quality or even fairer labor practices. It would be easier (though less fun and less personally enriching) to simply buy beans with the proper stickers slapped on them. But it takes a bit more thought, and in the end that is a good thing.

It does not have to be like this. The taste of a given coffee is enormously sensitive to how it was grown, when and how selectively it was harvested, how it was washed and processed, how it was stored, how it was shipped. If you want to preserve the incredible beauty of a unique coffee, every step along the way must be undertaken with great care and diligence. For instance, a naturally sweet cup of coffee (no sugar needed) is the hallmark of beans that were harvested selectively, only when each individual cherry was ripe. It’s a mind-changing experience to drink, but until recently, very few people have had the opportunity to try it.

Descriptions of the flavors of beans from various parts of the world can raise a few skeptical eyebrows from people accustomed to bad coffee (that is, most people) or elicit snide comments about dainty epicureanism. But once one tastes the coffee in question, especially side-by-side with something radically different, all suspicions are allayed. Did you know that the volcanic soil of El Salvador gives the coffee there the sharp and pleasing tang of iron? Or that the Yirgacheffe region of Ethiopia produces delicate, lemony coffees while the Harrar region is famed for pungent berry flavors? And among those who care about such things, there is a spirited, friendly debate about whether the unmistakable wet-earth taste of Sumatran coffee, a byproduct of peculiar local processing, is a flaw that masks the beans’ true taste or a delightful, unique trait to be preserved.

Commodities markets are indifferent to all this. Coffee is coffee, whether it tastes like liquid mold or liquid gold. So the quality of beans that end up in, for example, the United States, is highly variable. This is where dark roasting comes into play. Dark-roasting coffee is like stewing meat. If you have a prime cut of free-range, grass-fed beef, you can pan-sear it for a few minutes and end up with a divine steak dinner. If all you have is gristle and it’s starting to go bad, you can just stew it for hours to make it palatable. Both are technically the same thing (cow meat), but, of course, there’s really no comparison.

Similarly, over-roasting coffee is a way of hiding the flaws in your coffee. If it was carelessly harvested, processed sloppily, and sat in a steamy tropical port city for way too long before being shipped, it’s going to taste bad: sour, musty, and literally like dirt. But if you then roast that coffee black as coal smoke, it will taste sort-of adequate in a smeared-palate way: smoky, bitter and maybe, if you are lucky, a bit caramelly. (I have simplified the case here a bit. A skilled artisan roaster can actually produce a lovely, controlled dark roast using high-quality beans. You may rest assured, however, that this is the exception to the rule.)

The taste of coffee is also highly dependent on how it’s prepared. So even a very fine coffee, properly roasted, can taste terrible if someone pulls a 60-second shot of espresso with it (as opposed to a more skillful 28-second shot, for example). With all these variables, it’s no wonder many people prefer the darkest possible roast then combined with the most possible milk and sugar (or Splenda, or cream, or creamer, or soy, or vanilla syrup, or frappuccino powder or anything to mask the fundamental nastiness of the beans).

The great news is that truly good coffee is eminently accessible to people living in the West today. For the end consumer, it doesn’t cost appreciably more than the low-grade stuff, and it’s often considerably cheaper than the medium-grade stuff passed off as “gourmet” at the chain stores (or in those supermarket bins). Because the phalanx of faceless commodities-market middlemen is cut out of the equation, the farmers receive a much greater portion of the final sale price, and the whole thing, from field to cup, is done on a more human scale.

More and more coffee in this mode — carefully produced, ethically sourced, fresh, and delicious — is reaching our shores every year, and more baristas and roasters are learning how to skillfully prepare it. It can be hard to imagine that this movement will become more than a footnote to the monstrous global coffee trade. Especially as the big companies begin to take notice and start to parrot the language of artisanal coffee without changing what they do, people are understandably very confused about what they are buying. But we have yet to even approach anything like a ceiling for how far we can take it. Perhaps skepticism about how big a difference it makes is rooted more in a failure of the imagination than in the supposedly ironclad laws of capital. And if you pay more attention to taste than to packaging or verbiage or stickers, you are off on the right foot. Ultimately, choosing to treat ourselves to better coffee is still only a consumer choice. I can’t pretend it’s anything else. Of course, it’s also a consumer choice to drink bad, commodity-style coffee, or to remain in the dark about the difference.

La Esmeralda’s jaw-dropping $130/pound is clearly an anomaly. Certainly the name cachet (and quite possibly the sheer, giddy lunacy of the moment) drove up the price. I don’t think anyone believes it is literally dozens of times more delicious than, for instance, Panama’s second-place coffee. But it was a watershed moment for growers around the world: a recognition of the worth of skill and dedication. A price that high is not sustainable, even for the wealthy West. But it is a beacon of what is possible. Coffee farmers deserve far more than they have received in the past, and they are beginning to get it. To continue this promising trend, though, people must come to view their coffee in a new light, not as undifferentiated rocket fuel, but as what it is: a unique and ever-changing product of specific places, specific plants, and specific hands that work the soil.

For a good primer on how to find, purchase and prepare great coffee, try this series by tonx on dethroner. People in the New York region may be interested in the New York Coffee Society.

Daniel Humphries is a professional barista trainer and coffee sommelier living in New York City. His homepage is here.

‘The Speewah Ballad’

Australian poet and author Peter Nicholson writes 3 Quarks Daily’s Poetry and Culture column (see other columns here). There is an introduction to his work at peternicholson.com.au and at the NLA.

Since one should always take art seriously (Duchamp, anyone?), there is the danger of then taking yourself terribly seriously too. Therein lies the error. You must laugh at yourself and the world. That is essential. Laughter is, as is said, the best medicine.

I guess I’ve seen the episode of Seinfeld where Elaine thinks she’s contracted rabies several times now, yet it still makes me laugh out loud. ‘I don’t need you to tell me what I don’t want, you stupid, hipster doofus!’ she barks at Kramer when he tells her that she doesn’t want to get rabies because it can be fatal. Please don’t give me any guff about being ‘oppressed’ by American cultural product. I’m quite capable of recognising Elmer Gantry when he comes in through the door, or the screen. There may not be any laughs when listening to Mahler, and you are unlikely to guffaw in the middle of Kafka, but art is contradictory in that way. Often, in implausible places, laughter can get hold of you and bring you haughty stares. Getting overcome by a sense of the ridiculous at some earnest art installation; Queen of the Night journalists who think they are remedying, rather than discussing, complex problems with their columns; some chefs, having mistaken themselves for artists, making a giant fuss about meals you wouldn’t feed to a brown dog; fashionista stick insects dressed in clothes that might have been lifted from an alley bin—everyone could go on to make their own list. We need satirists to show us our foolishness, nowhere more so than in our political certainties or lifestyle choices. For example, there seems to be a new fashion amongst some for going up into space with astronauts, or getting ‘buried’ in space. Can’t you see the comic possibilities here? ‘I’d like to walk on Mars on your next expedition.’ ‘O.K. That’ll be 50 gazillion dollars thanks.’ Having done a comprehensive job of wrecking everything on Earth, manimal goes forth in his/her quest for future dominance. What a prospect. Let’s hope the newly-discovered earth ‘double’ isn’t too close for comfort.

Have a look at Uncyclopedia—the ‘content-free’ encyclopaedia, a parody of Wikipedia—sometime. You encounter some very politically-incorrect writing, but we’re all grown up enough to get past that, aren’t we? Try the article on the slate industry in Wales. If that doesn’t give you a laugh, not much will, though humour is, like everything else, a matter of taste.

Australian humour tends to be cynical about established orders or about anyone who is seen as getting above themself, which has both positive and negative aspects. The larrikin spirit, defined in the convict, colonial years, has had many a poetic devolution in present times. Here is a poem that parodies, not unkindly I hope, the bush ballad tradition made famous by ‘Banjo’ Paterson, among others, with its sturdy carapace, perhaps predictable rhythms and thumping rhymes. You can’t expect the fine shadings or metaphysical heft of a Rilke or Milosz in verse like this, but the Australian ballad form can be enjoyed on its own terms, reflecting, beneath the broiling and sometimes mannered surface, shadows and ambiguities. Speewah refers to an imaginary outback station, what Americans would call a ranch.

                                                              *

                       The Speewah Ballad

It had come to my attention in the local boozers’ pub
   That my yarns were getting hoary and my wit had missed the mark,
They were getting tired of hearing all my macho, matey turns
   And wanted something different to raise their spirits as they worked
For bosses who looked down on them and found their habits slack;
   My name is Terry Overall—my humour’s pretty black!

One chap, old Stubby Collins, had touched me for a shout,
   Said, Now get on, you young galoot, your tales are up the spout.
But as I rolled on home that night reviewing what I’d told
   I thought that I was really quite a sentimental cove
And Collins was right up himself, for who was he to tell
   My stories couldn’t bore them least of all in that hotel.

I left the bar round tennish, walked past the closed-up shops,
   Called out Debbie’s name when home and cursed the booze bus cops
Who took away my licence—I’d only knocked down two—
   Because one evening after work I’d got into a blue;
One’s in a wheelchair now, he’s great, his splints are off his legs,
   The others got a compo cheque but can’t remember facts.

Debbie, I call out once again, and then I see a note
   Left on the kitchen sink beside a half-drunk can of Coke:
I’ve left you Terry, you’re no good, you’ve bashed me up too much.
   I’ve got the kids and I’m sure glad I’ve left this dump at last.

Well I’ll be blowed, I cogitate and scratch my sweating brow,
   I never was much keen on that two-timing Goddam cow.

Then down I sit and exercise my mind on lots of things,
   There’s rubbish on the unmown grass and murder in the wind,
   But what’s the good of bitching ’bout a woman who’s like that;
   I’ll go and visit Micky out on Speewah’s lambing flats.
He said he’d like to see me last time he was in to town,
   He’s a lark, this mate of mine, a bonzer, strapping clown.

He almost drowned at Bondi once when we were at the beach.
   He swam way out, then got cramp, was almost out of reach,
When in the nick of time a surfer helped him back to shore;
   Better than a shark, he said, but hell my guts are sore.
He always sees the bright side, even off his scrawny pins—
   I’d told him not to eat those greasy, cold dim sims.

Well anyhow next morning I packed my bag to go,
   Made sure the ute was ready for a thousand miles or so,
Rang the boss and asked for leave, I told him I was sick—
   I work down at the abattoir, Jesus it’s the pits.
He wasn’t pleased but when I told him what had really happened
   He softened up and told me that I really must feel flattened.

And so I left the suburbs—my place is in the west—
   You need a car to get about, it’s hot, you get depressed.
I feel much better knowing that I’m leaving for a while
   (The house is still unpainted, the yard looks like a sty),
And now the wife has left me I think I’ll chuck the lot,
   Leave the place as well as get myself another job.

Soon the city vanished as I shot off down the highway,
   The road was beaut to drive on, you could speed down there quite safely;
I gave a few slow motorists a scare or two at times—
   Without the licence handy I still avoid the speeding fines!
The coppers never touched me, I was lucky to escape
   Their nosy parker checking of bald tyres and number plates.

After an hour of travelling I picked a hiker up
   Who said he was off to Melbourne for a talk on a chap called Bart—
It didn’t make much sense to me, but passed the time of day.
   He was full of himself, this fellow, I thought he could’ve been gay,
But later on a female hiker came into our view;
   We stopped for her and he soon implied that it might be nice to smooch.

She took up the offer quickly, her hands were on his crutch,
   And soon I had to stop the ute because of all their thrust.
I let them out on the grass beside the highway’s steaming tar;
   They finished off their business then while flies about them buzzed.
At last they hitched their jeans back up and brushed the ants away,
   Leaning by me tiredly as the miles blurred into haze.

It wasn’t what I’d planned of course, and I felt pretty slack,
   This trip was getting stranger and time felt out of whack.
At last I reached the turnoff and I had to wake them up—
   Sorry, turning off here. Hope you’ll have some better luck,
And left them thumbing lazily beside a dusty corner,
   Making off for Speewah as the afternoon drew nearer.

Then suddenly I thought, The dog! My God I haven’t got the dog!,
   I’d left him in the yard at home, the poor old pooch, poor Trog;
The neighbours will look after him and give him cans of Pal,
   I hope he doesn’t bark all night and give the whole street hell.
That dog is worth a dozen Debbies, so much better than the missus,
   That if I had to choose between them, sure as eggs, he’d come up roses.

Now as I travelled westward the weather grew quite blustery,
   The sun shone in my bloodshot eyes, the road became real uppity.
Then all at once the countryside seemed different and remote—
   The east had had a bit of green but now the land seemed broke,
Dead branches lay beside the road and bones were everywhere;
   I wasn’t one to worry but this country had me scared.

Last time I’d gone to Speewah I had come another way,
   But that was several years ago in summer, Christmas Day.
Now it’s late in autumn and the days are so much shorter
   You wouldn’t think the place the same—maybe I’m just getting older,
Though I’m only thirty-three and still got all my marbles;
   This time it sure seemed different as the distant thunder rumbled.

Soon I was low on petrol and the sky was getting darker
   And so I kept a lookout for a garage or a parked car
From which to siphon off a bit in case I couldn’t find
   Out on this lonely country road a service station sign,
But strike me lucky, there was one not half a mile ahead
   Set just beneath a ring of hills whose sides around me reared.

I pulled into the bowser and got out to stretch my legs,
   A cold wind stirred the eucalypts as blackness round me spread,
I looked about in hope of finding someone who would fill
   My ute with oil and petrol so that I could cross those hills.
Then out of the blue a hand descended, gripping me on the  shoulder,
   And when I turned my stomach churned and through me went a shudder.

Before me stood a shrunken form in khaki dungarees,
   With hollow face and staring eyes, he seemed to be diseased,
But he was just a loner, not complex or a pain—
   West of the Great Dividing Range that sort of bloke remains
What city folk are wary of, though country types are sure
   They’ve got it over big smoke types—they tell it through the year.

Of course back in the city people couldn’t give a damn,
   As long as the fridge is chocka, then bugger-all the farm.
Their usual way of spending time is spending money freely
   On objects that technology deems right for yuppie needies
Distinguished for their empty chat at groaning restaurant tables,
   Whinging through three courses on the subject of tax rebates.


Monday, June 25, 2007

Sandlines: Pygmies in the Hegelian Vortex

Edward B. Rackley

On my first day back in Kinshasa I met Mr. Kapupu Mutimanwa, self-appointed leader of Pygmy peoples in DR Congo. Unicef arranged our meeting in their high-rise offices in the bustling and congested center of town.

Kapupu was returning from an international forum of Pygmy peoples (1) organized in the forest outside of Impfondo, a remote town in the northern region of the Republic of Congo. Brazzaville, its capital city, lay across the mammoth Congo River, visible from where I sat waiting for Kapupu.

The forum convened Pygmy representatives from eight countries in the region to address land access rights in the face of expanding agro-forestry and mining industries. I hoped Kapupu would brief me on what this meant for the Pygmy groups I would be visiting in Equateur, DR Congo’s northwestern province hundreds of miles upriver from Kinshasa.

Les frères de Kapupu

A call to a Unicef assistant informed us that Kapupu was “empêché au port” (held up at the port) – stuck in the web of bureaucratic process after taking the barge from Brazzaville earlier in the day. I looked around the office. Posters instructing mothers to vaccinate their children decorated the walls. The surface of the faux-wooden desk where I sat, otherwise new and unblemished, was marked by deep circular scratches. A hole the size of a car tire was visible in a lower corner of the floor-to-ceiling plate glass window that gave onto the streets below, and to Brazzaville across the river.

The Unicef assistant smiled and explained that during city battles the month before, a mortar crashed through the window, bounced off the floor and onto the desk. There it spun in circles, but failed to detonate. Staff were hunkered down in office corridors for two days, waiting for the fighting to subside. Several employees in a bank three floors below were killed by stray bullets and mortars; Unicef personnel escaped uninjured. Hostilities are still virulent following the presidential elections in late 2006.

Kapupu arrives towards the end of our scheduled meeting. He enters embarrassed but smiling, and extends his hand. “Papa,” he calls me, and excuses his late arrival. He proudly wears a suit and tie, his small frame swallowed by an oversized neck collar and chunky cufflinks.

As we sit down, Kapupu recounts his personal journey as the first Congolese Pygmy to graduate from university, the first Pygmy to meet former president Mobutu, the first to travel abroad to visit indigenous groups in Latin America, the first to win grants from the European Union to organize fora like that of Impfondo. He hails from South Kivu, a province in eastern Congo. He has never visited Equateur, it turns out.

Kapupu reiterates his intention to bring all Congolese Pygmy groups under his leadership. I infer that the power to represent Pygmies to the wider world is not easily won. I learn nothing of the situation in Equateur, except that Pygmies there are all “les frères de Kapupu” (Kapupu’s brothers).

After an hour of Kapupu monologue, Unicef calls the meeting to a close. As he leaves, the real reason for his visit becomes clear. “Whenever the white man appears to help Pygmies,” he says to no one in particular, “there is more suffering.”

“So self-imposed exile is the solution?” I ask him. He then backtracks, apparently not having thought through the implications of such a position.

With his charisma and masked desperation, Kapupu was unlike any Pygmy I had ever met. His style and demeanor reminded me more of Congo’s civil servants, an army of low-level administrators scattered throughout the country’s forgotten interior. With no link to the febrile Kinshasa government, Congo’s provincial bureaucrats—all charlatans like Kapupu—fashion their leadership from pure chutzpah and enchantment with their own spectacle. Easily intimidated and gullible, impoverished illiterate rural populations submit to these neo-feudal overlords without question. Lord of the Flies in flesh and blood.

Fear and Loathing

I was accompanied on this trip by Benani Nkumu, an educated Pygmy from southern Equateur, who contrasted with Kapupu in every imaginable way. Unlike Kapupu, Benani is not interested in the politics of redress for indigenous peoples. His work as an effective community mobilizer introduced him to Unicef, for whom he serves as a kind of interface between rural Pygmy groups in the region and Unicef development programs. Benani harbors no victim complex, and though he fears the Bantu (2), in their presence he is neither vindictive nor overcome by insecurity as other Pygmies are. His recurring anxiety, he told me over palm wine one afternoon, stems from the envy of his ‘confrères’ or brethren.

Jealousy, ressentiment and Schadenfreude permeate intra-Pygmy relations; they are equally pervasive in rural Bantu society. An ambition to improve one’s lot, be it for personal or collective gain, is suspect and is discouraged through a variety of means. Theft, explicit threat and black magic are common ways that pioneering spirits like Benani are intimidated. Because the Pygmy status quo is also its least common denominator (“stay poor, indentured and disenfranchised like the rest of us, or else”), Benani fears reprisal. Poisoning is his biggest worry, and he never leaves his glass unattended during drinking sessions with other Pygmies. Pygmies and Bantu refuse to eat or drink together, and do not intermarry.

At one point in our journey, after a long day of interviews and slow progress over muddy overgrown tracks, we got lost. It was late at night, and pitch black with no moon. Villagers ran out into our path yelling that the road ahead was not passable. We stopped the jeep, deciding it best to continue the next morning with the benefit of daylight. None of our party knew anyone in the village, or exactly where we were. The driver and I got out of the car and introduced ourselves to the villagers, explaining why we were lost and where we were headed. There was no food, they said, but we were welcome to sleep with them. Some palm wine appeared, and we sat down in the dark to chat and rest.

All this while, Benani and another Pygmy we had picked up along the way, Pastor Linganga, remained in the car. I opened the door to ask if they planned to sleep in the car or to join us outside, and noticed immediately from their body language that they were afraid and uncomfortable with the turn of events. I said that I thought our new hosts were good people, and that they would be welcome here. Soon we were all drinking and talking comfortably, with no elephant in the room.

Embourbé (Bogged Down)

We slept a few hours and at first light, we packed and drove into the forest. Our host family had organized a large party to follow us on foot, to help in case we got stuck, which happened almost immediately. After a brief negotiation, a digging team of Pygmy and Bantu went into action, and by 6pm that evening the jeep was free and back in the village. We would try a different route the next morning. The experience proved to me that with a financial incentive, Pygmy and Bantu could work together as equals, and share the dividends.

“My Pygmies”

While preparing for this trip I picked up The Forest People by Colin Turnbull, which I first read twenty years ago. It chronicles the lives and personalities of a small band of hunter-gatherers in the Ituri Forest of the country’s northeastern quadrant. Published in 1961, it is still a pleasant read, pre-dating the turn of anthropology’s gaze upon itself—where “watching the watcher” displaces exploration of an alien world as the primary analytic activity.

In the last two weeks of visiting Pygmy settlements along the rivers and long-abandoned roads within the isolated interior of Equateur province, one of Turnbull’s phrases surfaced in my mind. Turnbull describes the mixture of mistrust and awe with which the sedentary Bantu tribes regard their Pygmy neighbors as “a loathing, born of a secret fear.” For the Bantu, the Pygmy represents an “exotic other.” Coming from the forest—the abode of spirits good and evil—Pygmies are exceptionally skilled hunters, their women are coveted, their knowledge of medicinal herbs and roots is vast, and they are believed to possess spiritual powers and connections to nature that the Bantu lack. Because of these differences, they are judged as unhygienic, hard drinkers, unpredictable and ill-disciplined. Xenophobia is a classic identity enhancer; all peoples do it to some degree.

Unlike in other countries where indigenous groups are marginalized and excluded, land and forests are plentiful here. Access to land is arbitrarily controlled by Bantu groups, and while Pygmies here are largely sedentary, their subsistence farming is limited to small plots. To survive, they clear, sow, maintain and harvest fields for their Bantu overlords, for which they receive less than 50 cents a day. Bantu families “own” one Pygmy family or more, who besides working in their fields, fetch water and firewood, clean their homes and sweep their courtyards. In all our interviews with Bantu chiefs, priests, community leaders and ordinary folk, each referred to these day laborers as “my Pygmies.”

Bottom of the hierarchy

Apart from the cultural component of discrimination and quasi-enslavement, there is a structural element to the violence and inequality inflicted on Pygmies. Elsewhere in Central Africa, Pygmy civil society and activist groups tend to argue for redress and entitlement on the basis of “historical precedence” (they were here first), and in some cases “cultural genocide” (as their livelihood and traditional lands are threatened). Unicef works with Pygmies across this region, but does not support these arguments or fund activist groups pursuing these angles of argument.

Instead, Unicef situates the Pygmy predicament within the context of their systematic discrimination and marginalization by the Bantu majority, who determine the contemporary social, political and economic conditions. Its aim is to promote the development of all by focusing on the most vulnerable—Pygmies in this case.

Visage

As I traveled with Benani, I often wondered about the historical, factual origins of the current situation. The extant literature is not particularly helpful, although theories about the origins of human inequality abound. In these, Pygmies continue to be the subject of ‘noble savage’ fantasies à la Rousseau. Turnbull’s experience with the Mbuti was clearly infused with this sentiment. Lévi-Strauss’s equally popular study of hunter-gatherers in the Amazon, Tristes Tropiques, did not romanticize their existence. In an earlier piece for 3QD, I considered the Jared Diamond hypothesis and its relevance to Pygmy marginalization.

“By accident of their geographic location,” Diamond writes, societies either inherit or develop food production capacities that in turn facilitate population density, germs, political organization, technology, and other “ingredients of power.”

The Diamond thesis illustrates one way in which Bantu peoples have been able to populate a much wider area than the original Pygmy inhabitants, outnumbering them and ultimately dominating them. None of the animals or edible plants indigenous to the so-called Congo River Basin (DRC, ROC and CAR) are among those the Bantu domesticated and cultivated. In the forest, nomadic Pygmies survive on wild plants and animals that resist regular cultivation as crops and domestication as livestock. Hunting and gathering not only precludes an economy of surplus, because it is motivated by immediate consumption, but it also limits the geographic range in which Pygmies can live without undertaking a radical shift in their primary mode of subsistence.

Bantu can take their crops and livestock wherever there is plentiful water and arable land, which includes forest areas used by Pygmies. As the Bantu demographic saturates a newly settled area, Pygmy domains are ‘colonized’—a common sentiment among Pygmy leaders we met during the visit. All felt that while the Bantu were now independent with the retreat of the European colonial regime, Pygmies remained colonized by their Bantu ‘masters’. The majority of Pygmies we met were sedentarized, but did not farm for themselves. Instead they worked as day laborers for the Bantu.

Given the Bantu-Pygmy master/slave dynamic in this part of DR Congo, Diamond’s thesis lacks a key causal element: the comparative advantage that colonialism afforded the Bantu, who were already settled and accessible to outsiders while Pygmies remained mobile in the forest. As such, Pygmies were largely inaccessible to the colonial administration’s ‘civilizing mission’.

Colonialism brought new economic and political structures that reinforced the power of sedentary agricultural peoples over herders, hunters and gatherers. During colonial rule, agricultural peoples had easier, if quite limited, access to education, health care and other social services that were almost completely denied to indigenous communities.

Colonialism thus made it easier for Bantu to access the state apparatus. When colonialism ended, it was Bantu educated elites that took over the institutions of political and social power. At the bottom of the post-colonial hierarchy were nomadic hunters and gatherers. Congolese Pygmies have had to play ‘catch up’ ever since. Indeed, they have nowhere to go but up.

====

1 The term ‘Pygmy’ is used here as adopted by indigenous activists and support organizations to encompass the different groups of central African forest hunter-gatherers and former hunter-gatherers. Sometimes used pejoratively, here the term is used to distinguish them from other ethnic groups who may also live in forests, but who are more reliant on farming, and who are economically and politically dominant. The Pygmy groups covered in this study include the Tua and the Lumbe.

2 A term conventionally used for settled farming peoples, although these groups include Oubangian and Sudanic language speakers as well as Bantu language speakers. In the southern Equateur province of DRC where this trip took place, the primary Bantu groups using Pygmy labor and whose discriminatory practices form the object of this study are the Nkundo and the Mongo.

Selected Minor Works: Hipsters, Prepare to Die

Justin E. H. Smith

O who could have foretold
That the heart grows old?

–W. B. Yeats, “A Song”

I am a salaried functionary and a family man.  I long for peace and quiet and a good night’s sleep, and I wear whatever my wife tells me to wear.  At this point I no more belong in Williamsburg than I do in Sadr City.  I send none of the signals that would assure the natives of my right to be in either place. 

Just yesterday things were quite otherwise, at least as far as Williamsburg is concerned, and I attribute the changes not to will but entirely to necessity. Physiologically, I simply did not have the luxury of extending my membership in metropolitan youth subculture indefinitely.  My temples went grey, my body shape changed, and college students started calling me ‘sir’ at an age when I was still holding out the hope of being invited to their parties.  In large measure it was unfavorable genes that forced me out of what would otherwise have been a life of unrepentant hipsterism.

By ‘hipsters,’ I mean the youth in the developed world who construct their social identity primarily in opposition to the prevailing sensibilities of the age, without however conceiving this opposition as political.  On a global scale, hipsters seem to have emerged out of the Reagan-Thatcher years in those countries that earlier witnessed the cultural shift known in Western Europe as “’68” and in the US more broadly as “the sixties.”  (To some extent, the origins of the new form of opposition can be found in the sixties themselves, from French situationism to Abbie Hoffman’s advocacy of ‘revolution for the hell of it’, but the prevailing ideals of that era remained serious ones.)  The complete account of hipsterism’s emergence out of the ruins of 1960s utopianism is beyond our scope here, yet the genealogical link is clear: where sex, drugs, and rock and roll were not a principal cause of historical change, where instead the youth were contending with wars, dictatorships, and real, government-imposed cultural revolution, today there is little or no hipsterism.  Today you will see stencils of Mr. T (or whomever; you get the idea) spray-painted on the walls of London and Amsterdam, but not Bucharest.

For hipsters, prevailing ideas and values are not necessarily oppressive, just stupid; not necessarily worthy of anger, just ridicule.  (They generally focus on cultural output from the recent past, for reasons we have yet to consider.) Thus for example hipsterism encourages its adherents to propose, in writing, on their t-shirts, to sell moustache rides for five cents, not because they intend to give anyone a moustache ride, and not even because the apposition of ‘moustache’ and ‘ride’ is seen as a source of humor.  What is humorous is that in some imagined Country Comfort Lounge in Amarillo or Cheyenne a generation ago some big slab of a man actually sported a moustache of which he was proud, which he believed could function directly and un-ironically as a sexual attractant.

In Bucharest in contrast you will see t-shirts bearing the following messages: “Action Product Girl,” “Ultimate Outback All-Star Crew,” “Surfing Life-Style #1: O-Yes!” You will see the suggestive “Varsity Marine: Red Bum’s Up in Seemans Quarter,” the poetic “Rebellion Speed Inside Energy World’s,” and, my personal favorite, “Fertile Enclosure Fashion 56.”  Have there, I wonder, been any sociolinguistic studies of these English-sounding strings of words?  Clearly, they are generated and displayed in part out of a simple fetish for the sterling-standard idiom of the era of globalization.  But for the most part I suspect there is no intentionality at all behind them. These words are not bearers of meaning; they are strictly decorative. Whether I am right about this or not, one thing is clear: one does not wear such t-shirts as a joke.  They either convey nothing at all, or, to the extent that the message is understood by the wearer, they convey an earnest wish to say something serious about oneself: ‘I am an Action Product Girl,’ ‘I participate in the Surfing Life-Style.’  They are a world away from the “moustache rides” message.  They are the product of a different history and a different logic.

But why is hipster ridicule directed at the cultural output of a generation ago?  Why is irony focused upon the recent past?  Contrary to some facetious fears that the retro gap is closing, and that soon we will be celebrating for its ironic value the cultural output of this very day, in fact it seems that the ironic focus is eternally fixed upon the detritus that was floating about right around the time of one’s own origins, the things that could help to explain how one came to be at all, including the invitation to a moustache ride that just might have led to one’s own conception.

Hipster irony is at bottom a preoccupation with the problem of origins, and as I have said the portion of one’s life one can appropriately devote to hipster irony depends in large part on the course set for the body by the genes. But the changes in my case were not just physiological. Psychologically too, at some point all my interests either became earnest interests, or no interests at all. I offer an example from that most common measure of subcultural identification: music.  In the mid-1990s, I made the rare discovery (for an American) of Joe Dassin, Dalida, and other French and Italian pop stars from a generation prior.  I would put on Dassin’s “L’été indien” at parties and the guests would marvel at how treacly and over-the-top the string section was, how the rhythm made them think of ‘70s swinger parties of the sort Michel Houellebecq would later ruthlessly de-eroticize, or of some French smoothie in a Jacuzzi, again with a moustache, inviting a topless female reveler to ‘make love’.  And most of all they would marvel at how recherché my CD collection was, at how well it reflected the desire among those of my generation for music that fascinated precisely because it was originally created for listeners whose lives we could scarcely imagine.

And yet, today, my wife and I put on Joe Dassin when we are at our respective computers writing, for the simple reason that we enjoy the sound of it.  Why, my heart now wonders, would anyone listen to music that he does not, straightforwardly and earnestly, like?  Why, for that matter, would anyone take an interest in anything other than in view of its genuine interestingness?  Just what are the smart-ass youth, who like trucker hats precisely because they look down upon truckers, and who appreciate cowbells in music because naïve disco-goers once truly appreciated cowbells in music, trying to pull off? What, in short, is irony in its latest and dominant form?   

History’s greatest philosophical ironist conceived of philosophy itself as nothing more or less than a preparation for death.  When Socrates said that to philosophize is to prepare to die, and when Montaigne echoed this at the dawn of modernity, they did not mean that philosophy consists in tending to one’s last will and testament or constructing one’s own coffin out of plywood.  They meant that the project of becoming wise is one that culminates late in life in a stance of equanimity vis-à-vis one’s own mortality.  “I have seen men of reputation,” Socrates tells the jury about to convict him, “when they have been condemned, behaving in the strangest manner: they seemed to fancy that they were going to suffer something dreadful if they died, and that they could be immortal if you only allowed them to live; and I think that they were a dishonor to the state, and that any stranger coming in would say of them that the most eminent men of Athens, to whom the Athenians themselves give honor and command, are no better than women.”  His tranquil acceptance of his hemlock is a reflection of his wisdom.  Yet in his speech to the jury he also points out that he is now 70, and probably would not live much longer anyway.  His death is not met as a sacrifice, but with indifference (this in marked contrast to the death of Jesus Christ at 33).  No one could expect a youth to meet death with indifference.  A corollary of this point is that no one expects a youth to be wise. 

Philosophy today is age-blind, which is to say that (other than a few thought-experiments involving infants), philosophers talk about the way people think and act as though people do not go through stages of life.  Imagined rational agents, making decisions about the most just society from behind a veil of ignorance, or deciding whether to pull a lever at a switching station, are presumed to be adults, certainly.  But are they 20, or 70?  Isn’t it reasonable to expect different sorts of behavior in the one case than in the other?  There is general agreement that some degree of selflessness in one’s conduct is morally laudable, but the scientific evidence tells us that the changing quantities of hormones in the body throughout the stages of one’s life have a good deal to do with whether one will act egocentrically or not.  I find myself growing more concerned about the well-being of others, but I do not think that this is because I am becoming ‘more moral’. It is only because I am no longer driven by that mad fire that used to course through my veins and cause me to strive for nothing but my own advancement and gratification.  I couldn’t have done otherwise then, and I can’t do otherwise now. 

Race, gender, and sexual orientation have captivated academic imaginations for the last few decades, particularly among leftists in the humanities who had grown bored with the traditional focus upon class antagonism as the engine of history.  Race and gender are more or less fixed social categories, notwithstanding the opportunity medical technology has offered to a very small minority of people to change the biological basis of their gender identity, and notwithstanding the ultimate biological illusoriness of racial taxonomies.  Sexual orientation is fluid, even if the tendency in our society is to conceive it on analogy to race and gender, that is, as constituting part of one’s ‘essence’ and thus as being coextensive with one’s own existence.  Yet all the while age remains well off the radar of the organizers of conferences and the getters of grants, and it is interesting to note in this connection that unlike sexual orientation there is no possible way to essentialize it: there is no way to conceive of the predicate ‘…is young,’ say, as pertaining to the identity of an individual always and necessarily.  Being young, like sitting or sleeping, is something that can be both true and not true of the same subject.

‘…is young’, as I’ve said, is a predicate that pertains to me less and less, and it is perhaps for this reason that I have, of late, begun to hope for the reintroduction into philosophy of reflection upon what used to be called the ‘ages of man’.  I do not know whether aging is something to be thankful for, as Socrates seems to have thought, but I do know with certainty that it is not something to be awkwardly and unconvincingly denied, as balding hippies, with their scraggly ponytails and their irrelevant cultural reference points, insist on doing.  And there is no use in pleading that, though the ponytail thins, the gut expands, and the stream weakens, one is nonetheless ‘young at heart’.  For the body is the body of the soul, and these outward signs of the approach of death are but reflections of internal changes.  Yet it is characteristic of the postwar generation to deny that the heart must grow old, to insist that it is free to follow a course entirely independent of the geriatric corporeal substance.    

But what I am concerned about is my own generation, those who have worn “moustache rides” t-shirts for reasons several degrees removed from their original intent, and its prospects for aging well, which is to say its prospects for dying with grace and equanimity.  At first glance, the fact that hipsters share irony with the West’s wisest condemned prisoner would seem to bode well for them.  Yet Socratic irony and hipster irony could not be more different.  Hipster irony has to do with taste, not truth, and it only makes sense relative to a certain context of commitments and preferences, while what Socratic irony strives for is a contemplative detachment from all partis pris.  In an absolute sense, there is nothing more in Death Cab for Cutie or Arcade Fire that commands one’s earnest and straightforward appreciation than there is in Boxcar Willie, Juice Newton, or Perry Como.  From a certain perspective, it is all garbage, and from another it is all fascinating.  Hipsters still hope to draw a distinction between the genuinely good and the merely humorously good, by means of a bivalent logic in the end no more subtle than the ‘cool’/‘sucks’ dichotomy through which Beavis and Butthead filtered the world.  An elderly ironist in contrast has had the time to watch enough cultural flotsam go by that he can no longer pretend that one instance of human productivity is intrinsically much more ridiculous than any other.  Fully convinced of this truth, he might truly be prepared to die: he knows what to expect from the world, and so expects nothing more. 

But that of course is no fun, while youthful irony is a blast.  It will thus be interesting to see in the coming decades whether the irony that has defined the world view of an entire generation of educated Western children will prove capable of aging along with those former children’s bodies.  It is still far too early to tell, though it is likely that the repellent example set by their aging parents, who remain deadly serious about the ‘accomplishments’ and enduring relevance of their generation, who never really learned how to be old because they remained so loyal to the moment of their youth, will serve as an incentive towards reflection on how to age well, which, again, the old philosophy tells us, is the same as to die well.

Even in my own case, it is far too soon to tell.  I am sure as hell not yet wise, as I find myself nowhere near ready to die.  Like some modern-day Ivan Il’ich, I cannot begin to imagine how I – who once impressed party-goers with my selection of “L’été indien,” and who mixed it seamlessly in the mid-1990s with some other bit of music that had just come out of London or Bristol, something they called ‘trip-hop’ that set the crowd to dancing on my packed living room floor – could possibly do that well.  I am serious, all too serious, about all those bits of flotsam to which I’ve happened to cling, and which have kept me buoyed and breathing.

Iasi, Romania
19 June, 2007

For a comprehensive archive of Justin Smith’s writing, please visit www.jehsmith.com

Did Bernard Kouchner really endorse the Iraq War?

by Alan Koenig

Two prominent Liberal hawks recently celebrated the arrival of Bernard Kouchner as French Foreign Minister, for here was a heroic humanitarian, the founder of the noble Doctors Without Borders, a tireless champion of the oppressed, who has risen to command the foreign policy of a nation that cravenly opposed the Iraq War. Christopher Hitchens sang his praises in Slate, and The New Republic reprinted portions of Paul Berman’s Power and the Idealists, a fascinating intellectual group biography of the European New Left and their rise to relative power. There’s just one problem with these paeans from the Liberal hawks, a small fact that Hitchens omits and Berman oddly misinterprets: Kouchner publicly opposed the Iraq War.

Kouchner had long decried the tyrannical horrors of Saddam Hussein’s Iraq, and he often berated the international community for not coming together to remove the dictator, but he repeatedly opposed the American invasion. This shouldn’t be a terribly complicated position, and Kouchner first put it in print in early February of 2003, when he coauthored a “manifesto” entitled Neither War Nor Saddam in which he opined that the “solution to Saddam will take time,” that “it can not proceed at the same time as military pressure,” and that the United Nations should call together a conference to bring more international pressure on Saddam.

If you can read French:

La solution du problème Saddam prendra du temps. Elle ne peut procéder, en même temps que du maintien de la pression militaire, que de la prise de parole du peuple irakien telle que pourrait la favoriser la désignation d’un médiateur des Nations unies. Avant tout, nous souhaitons que les membres du Conseil de sécurité organisent sans délai une conférence internationale qui mette en lumière les exactions de Saddam Hussein et amplifie la pression conduisant à son départ, au lieu de tout faire pour fabriquer un nouveau héros.

(If you need a translation, the key passage appears in English, via Oliver Kamm, near the end of this piece.) Kouchner’s conclusions seem very clear even if your French is as atrocious as mine: “Non à la guerre, non à Saddam Hussein.”

Kouchner stuck to this line even a week before the war; during a debate at Harvard he continued to rail against Saddam’s despicable regime and oppose the war, just as he had stated in Neither War Nor Saddam:

He repeated his opposition to war several times in his half-hour speech and during a subsequent question-and-answer session. Yet even as he said the Iraqi people’s voices should be considered, he also said he’s sure some would approve of their nation being bombed if it meant being rid of Hussein . . . Despite the ongoing brutality, however, Kouchner said he also knows the brutality that war brings and said he does not support an American war on Iraq. [emphasis added]

So how does the intellectual historian Paul Berman read Neither War Nor Saddam? He starts out with the Kosovo crisis and the bombing campaign against Serbia, and somehow ends up asserting that Kouchner proposed the same tactics for Iraq in Neither War Nor Saddam:

Kouchner wanted to try similar methods in Iraq, a series of graduated steps, in the hope that one or another of those ever more forceful measures would ease Saddam out of power, without having to resort to anything as violent and risky as a full-scale invasion. Give less-than-war a chance, was his idea–though the only way to do this convincingly was to brandish the certainty of all-out war as the only alternative. Kouchner belonged to a bipartisan, left-and-right political club in France called the Club Vauban, and, in the name of this organization, he and another club-member composed a manifesto under the slogan, “Neither War nor Saddam,” advocating these graduated measures.

“Brandish the certainty of all-out war as the only alternative?” What about Kouchner’s claims that the “solution to Saddam will take time” and that this “can’t happen at the same time as military pressure”? And where did the call for a bombing campaign come from? Did Kouchner propose such a thing elsewhere, and did Berman mistakenly conflate the two statements?

I’ve been unable to locate any such statement of Kouchner’s, but Berman repeats his assertion that Kouchner advocated a Kosovo-like solution in the Spring issue of Dissent, while you can see for yourself that there’s no such mention in Neither War Nor Saddam. From this apparent misreading, Berman goes on to assert that Kouchner’s arguments justified the Iraq War:

But Kouchner’s argument about Iraq mostly focused on a specific reality, and this was the scale of the disaster in Iraq under Saddam’s rule. The grimness of the human landscape in Iraq, together with the plea for help that so many Iraqis had been making for so many years, sufficed to justify the invasion, even without reference to worldwide principles. Yet where were the champions of the humanitarian cause, the human-rights militants, who should have responded to these pleas?

Where were they? Perhaps, Mr. Berman, they were listening to his “Non à la guerre.” Lacking an accurate understanding of Kouchner’s manifesto, some of Berman’s narration appears contorted and bizarre. Throughout Power and the Idealists, he seems confused by Kouchner’s gentleness, his tolerance for the positions of his anti-war debating partners — a confusion that can be lifted by reinserting Kouchner’s own opposition. For instance, in a debate between Kouchner and the famed European New Leftist Daniel Cohn-Bendit, Berman wonders where the fireworks are:

Cohn-Bendit did call for Saddam’s overthrow, actually. It was just that, in Cohn-Bendit’s estimate, the proper way to overthrow Saddam, as he explained, was to maintain a multilateral pressure, and help the Iraqis themselves overthrow their own dictator, someday. Kouchner could hardly take this seriously. Cohn-Bendit’s program was a nonprogram. A make believe. Kouchner didn’t point a finger, though.

Hmmm. Maybe Kouchner didn’t point a finger because Cohn-Bendit was so close to his own position. By the time Berman writes his profile of Tariq Ramadan in The New Republic, Kouchner’s position on the war has become, for Berman, a “highly modulated” endorsement of the war. So much for “Non à la guerre.”

In Berman’s defense, it is not difficult to find media and other analyses that believe that Kouchner supported the war, though many of them, like Stephen Holmes (writing in The Nation), do so by simply repeating Berman’s apparent error. Now, it is possible that Kouchner did, on occasion, voice support for the Iraq War by calibrating his responses according to his audience, but that wouldn’t make him much of a hero to the Liberal hawks (nor answer how they missed out on so much of the content and meaning of his public opposition). Indeed, in a lunch with the Financial Times’ Robert Graham, Kouchner was reported to have said:

Saddam was a monster. The case for going to war to get rid of him was not one of weapons of mass destruction – they probably weren’t there anyway. It was a question of overthrowing an evil dictator and it was right to intervene.

You could read this as saying that it was right to intervene for humanitarian reasons, and that case wasn’t adequately made by the Americans, as Kouchner, Joschka Fischer and Berman have complained. But there is some obvious ambiguity here, and the full quote tends to get attenuated when repeated, as does Kouchner’s public opposition:

Kouchner was one of the few in France’s political elite to justify military intervention against Saddam Hussein – on humanitarian grounds, not because Iraq might have been seeking weapons of mass destruction. “It was a question of overthrowing an evil dictator, and it was right to intervene,” Kouchner said in 2004.

(The NYTimes, which originally published the article above, corrects the record for another article here.) It’s also possible that Kouchner continued to rail against Saddam with all the righteous passion for which he is so famous, and that in his denunciations certain people missed out on the qualifiers against the American-led invasion. Either way, Kouchner at some point had to have heard of the ambiguity surrounding his position. Why didn’t he clarify or correct the record? As Oliver Kamm has noted, Kouchner apparently did so in May of this year in the pages of Le Monde:

Regarding Iraq, [Kouchner] recalls that, without sharing the tone of French diplomacy at the time, he opposed the war. “My position … is the one I expounded in a viewpoint entitled ‘Neither war nor Saddam’, published in Le Monde on 4 February, 2003…. It is the only one I have defended. I wrote: ‘Above all, we wish the members of the UN Security Council to organise without delay an international conference to make clear the abuses of power of Saddam Hussein and increase the pressure leading to his departure, instead of doing everything to manufacture a new hero. We do not wish for war, but we do not want the martyrdom of the Iraqi people to continue. No to war, no to Saddam Hussein.’” [emphasis added to Kamm’s translation]

What we’re left with after all this exegesis are two questions. How have Hitchens and Berman missed Kouchner’s public opposition to the Iraq War, and what does his dissent mean for the future of interventions that wish to claim humanitarian justification?

Teaser Appetizer: Why Does BIL Drink Water?

On Saturday morning, when I entered my kitchen to make tea for my brother-in-law (BIL) – who was visiting for the long weekend – I found him sitting at the kitchen table, looking intently at a row of glasses of water: eight of them, filled up to the rim.

“No thanks, I will not have tea. I’d rather have this water,” he declared.
“All eight of them?” I sounded surprised.
“To flush the toxins, that is how many you need,” he said.
“Eight glasses can’t flush your sins.” I teased BIL, the hedge fund manager.
“I said toxins, not sins.”
I sat there and watched him with curiosity: his eager gulps of the first glass waned into reluctant slurps of the fourth and forced sips of the last.

Relishing my tea, I envisioned the silent journey of this water through his body.

The water falls into BIL’s stomach, which absorbs some and pushes the rest into the intestines. The surface of the small intestine sucks it up – not like a sponge, but by actively creating an osmotic gradient. (Water travels by osmosis from lower osmotic concentration to higher. The amount of dissolved solutes in water determines its osmolality; a higher solute concentration means a higher osmolality.) The cells lining the intestine actively absorb sodium ions (salt) and extrude them into the microscopic space between the cells, which creates a higher sodium concentration in this area. Water permeates into this space from the intestinal lumen by osmosis and then leaks into the bloodstream meandering in the minute capillary blood vessels.

BIL’s intestine must cope with the massive deluge; about 90 percent of the water will enter the blood circulation through the small intestine. The permeability and absorption of water will decrease as it travels to BIL’s colon.

Blood circulation carries water to the farthest cells and inundates them. The cells are used to this; they remember the times when they drifted alone in the oceanic soup four billion years ago. After many mutations and missteps they evolved a wall around them – the cell membrane – to protect them from the surrounding poisons and to preserve their internal chemical tranquility, which includes maintaining a normal osmolality of 285 to 290 milliosmoles per kilogram (mOsm/kg). The cell membranes maintain this constant osmolality by rejecting the entry of sodium into the cell and preventing the escape of potassium and phosphate from inside.
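That 285 to 290 figure is not mystical; it can be estimated from routine blood chemistry. A minimal sketch of the standard clinical approximation, using illustrative normal values rather than BIL’s actual labs:

    # Estimate serum osmolality (mOsm/kg) from routine blood values.
    # Inputs: sodium in mEq/L, glucose and BUN (blood urea nitrogen) in mg/dL.
    def serum_osmolality(na_meq_l, glucose_mg_dl, bun_mg_dl):
        # Doubling sodium accounts for its accompanying anions; dividing by
        # 18 and 2.8 converts mg/dL of glucose and BUN into mOsm/kg.
        return 2 * na_meq_l + glucose_mg_dl / 18 + bun_mg_dl / 2.8

    # Illustrative normal values (assumed, not measured):
    print(serum_osmolality(140, 90, 14))  # ~290, the top of the range above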

When BIL’s consumed water arrives, it dilutes the fluid surrounding the cell and drags down the osmolality. The cell membrane lets only the water permeate into its interior thus maintaining the osmotic equilibrium. The dehydrated cells will keep the water but the already hydrated cells reject excess water.

Water also reaches BIL’s brain and heart – the two organs with sensors that detect water load.

The hypothalamus of the brain senses variations in osmolality (solute content) and in response secretes a chemical messenger, antidiuretic hormone (ADH), which regulates the volume of urine excreted. An excess of ADH decreases urine production; a lack of ADH increases it.

The heart has pressure sensors, which read the volume of circulating water and produce another chemical – natriuretic peptide – in response. Higher circulating water volume induces this peptide, which in turn coerces the kidneys to get rid of excess water.

BIL’s water binge does two things: it decreases the blood osmolality and increases the circulating volume. This shuts down production of ADH and enhances the manufacture of the peptide from the heart. Consequently, the kidneys oblige and get rid of the excess water.
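In control-system terms, the two signals act like complementary valves. A toy sketch of the logic, grossly simplified and purely my own illustration:

    # Toy model of the two water-load signals described above.
    def urine_response(osmolality, volume_ratio):
        # ADH is secreted when blood is too concentrated (high osmolality);
        # natriuretic peptide is released when circulating volume runs high.
        adh = osmolality > 290
        peptide = volume_ratio > 1.0
        if peptide and not adh:
            return "dilute, copious urine"      # BIL after eight glasses
        if adh and not peptide:
            return "concentrated, scant urine"  # a dehydrated BIL
        return "normal urine output"

    print(urine_response(osmolality=280, volume_ratio=1.2))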

That is exactly what I observed. BIL got up after the fourth glass – and a few more times later – to ease himself. The water had flooded BIL’s kidneys; the fall in ADH from the brain and the surge of peptide from the heart had assaulted his kidneys, and poor BIL had to frequent the toilet.

I visualized the nephrons of BIL’s kidneys in overdrive. The nephron is the filtering and urine-manufacturing unit of the kidney, and there are 1.4 million of them. This exquisite, intelligent, U-shaped microscopic tube is the final arbiter of the water volume in BIL’s body.

BIL’s water filters into one end of the nephron, travels through the U loop and trickles out at the other end into the urine-collecting system. Since BIL has excess water in his body, each nephron makes more urine, and the union of 1.4 million members of the urine production trade sends BIL to the toilet frequently.

The nephron responds to blood volume and pressure, sodium concentration and the quantity of circulating ADH. Through these mechanisms, it can produce varying volumes of urine of different solute concentrations. The function of the kidneys is to excrete solutes unwanted by the body; the water serves as a solvent carrier.

Kidneys can excrete at most 1200 milliosmoles (mOsm) of solutes per kilogram of water. BIL has to eliminate an average of 600 mOsm of solute daily. Since the maximal urine concentration is 1200 mOsm/kg of water, the minimal urine volume BIL needs to make is 500 ml to excrete those 600 mOsm. With BIL’s normal kidneys, a mere 500 ml should be enough to “flush the toxins – and sins.” But BIL just inundated his poor unsuspecting kidneys with eight glasses of water, and the penance for this ‘sin’ is the trip to the toilet.
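The arithmetic behind that 500 ml is worth a quick sketch; a toy calculation using the figures above, not a clinical tool:

    # Minimum urine volume needed to excrete a daily solute load,
    # given the kidney's maximum concentrating ability.
    daily_solute_load = 600   # mOsm of solute to excrete per day (figure above)
    max_concentration = 1200  # mOsm per kg of water, the kidney's ceiling

    # A kilogram of water is roughly a liter, so:
    min_urine_liters = daily_solute_load / max_concentration
    print(min_urine_liters)   # 0.5, i.e. the 500 ml quoted above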

BIL’s body handled the flood better than Bush managed Katrina. Water gushed to the heart, which pumped it vigorously to the farthest crevices of the body. As it traveled farther it slowed to a trickle, then seeped through the accommodating tissues and finally permeated into the cells. BIL’s body fluid regulatory apparatus sensed the deluge and the sirens went off.

OK, eight glasses is no Katrina, but we will agree BIL did it better than Bush.

Let us also be fair to BIL: 60% of his body weight is water and a mere loss of ten percent can be lethal. Water still surrounds and nourishes each cell, like it did when the cell was a complete organism in the waters of early evolution. When we left the oceans to venture onto land, we carried our share of water with us. The distribution of solutes and water is of utmost importance to normal cell function.

BIL maintains the volume of water in his body with exquisite precision through thirst, intake, absorption and excretion. His kidneys, hypothalamus, heart, cell membranes and thirst mechanism work in unison.

On a normal day, he loses about 1500 ml in urine and another 500 to 1000 ml in breath, sweat and stools. He needs to replenish this by drinking 2 to 3 liters of water daily.

Should BIL drink plain water or a sports beverage?

Water is the osmotic slave of salt and follows sodium with utmost fidelity. BIL does need some sodium in his gut for efficient absorption of water, and to absorb sodium the intestinal cells need a little sugar. Water with a dash of sugar and a pinch of salt will suffice. Sugar exceeding 8% of the beverage may actually slow the water absorption.

Here is a recipe: one liter of water; 1/3 cup of sugar; ¼ teaspoon of salt; add lemon or orange flavor; refrigerate and drink. This is the cheap homemade ‘Gatorade’. (BIL can use the other expensive one just for the ceremonial drenching of the coach.)
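For the skeptical, the recipe sits comfortably under that 8 percent ceiling. A rough check, assuming about 200 g of granulated sugar per cup and about 6 g of salt per teaspoon:

    # Sanity-check the homemade drink against the "sugar under 8%" rule of thumb.
    water_ml = 1000
    sugar_g = 200 / 3  # 1/3 cup of sugar, assuming ~200 g per cup
    salt_g = 6 / 4     # 1/4 teaspoon of salt, assuming ~6 g per teaspoon

    sugar_pct = 100 * sugar_g / water_ml
    print(round(sugar_pct, 1))  # ~6.7 percent weight/volume, below the ceiling
    print(round(salt_g, 1))     # ~1.5 g of salt, the pinch that drives absorption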

But BIL, being a hedge fund manager, lives by the dictum: nothing succeeds like excess; moderation is a fatal habit.

He inundates his cells in water with an atavistic compulsion and, like so many of his other beliefs, holds this one firmly: that drinking eight glasses of water in the morning cleanses the depths of his interior. It would be more physiological if he spread his drinking throughout the day.

BIL cannot drink water in the morning to hedge against the dehydration of the evening.

Monday, June 18, 2007

Grab Bag: Follow that Gay!

In the mid-1990s, urban economist and sociologist Richard Florida devised the “gay index,” a tool used to monitor and predict cities that could host profit-generating high-tech industries. The index essentially correlates the number of gay people living in an area with how many high-tech firms are located there. This, of course, is a good thing as it enables regional planners to better accommodate growth and accordingly adjust all of those terribly meaningful strategic plans that herald beautiful, functional cities. The gay index is directly linked to Florida’s argument about the economic advantages associated with attracting the “creative class” to cities, which will lead to revitalization, regeneration, and economic success. In addition to the gay index, Florida proffered the “bohemian index,” which measured the number of artists, writers, designers, and general “Cool” professionals located in certain cities against the presence of those same high-tech industries.
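For the mechanically minded: an index like this is, in essence, a location quotient, the share of a group in a metro area divided by that group’s share of the national population, which can then be correlated with tech concentration. A minimal sketch of that idea, with invented numbers rather than actual census data:

    # A location quotient: how over- or under-represented a group is in a metro
    # area relative to the nation. Values above 1.0 mean over-representation.
    def location_quotient(group_in_metro, metro_pop, group_national, national_pop):
        return (group_in_metro / metro_pop) / (group_national / national_pop)

    # Invented illustrative numbers, not census data:
    print(location_quotient(50_000, 2_000_000, 3_000_000, 280_000_000))
    # -> ~2.33: this hypothetical metro has over twice the national share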

Between the gay index and the bohemian index, creative classes and cool factors, Florida established a lexical melting-pot (to borrow another phrase of his) of ambiguous social terms to describe economic patterns and predict cities that might next host this seething mass of culture and hip-ness. An attempt to tackle this subject comprehensively requires study beyond the scope of a blog-essay, and has been done by the author’s countless critics—detractors, deriders, and general disbelievers who have questioned his methodologies, data, and value as both an economist and sociologist—to which Florida has, admirably, responded (though, I must say, rather unconvincingly) in subsequent works.

Rather than attack him on economic grounds, then, or even within the discourses of urban sociology, I’d like to take a moment to appreciate Florida’s use of and take on gayness. Seriously, just to back up a second. The gay index? Excuse me? So, somewhere along the way it became ok to pin tracking devices on urban homosexuals and exploit their flight into neighborhoods into which they are essentially exiled in order to capitalize on planning strategies and speculative development? Diabolical! It’s a scheme concocted by a Bond villain hit by the gay bomb (post-fabulous stress disorder!), it’s Jane Jacobs on poppers and ethnography written in Polari.

The million-dollar question, of course, is whether or not Mr. Florida is himself a card-carrying gay, ripe for the tracking and with a miraculous and preternatural ability to identify potentially hip—and therefore economically prosperous—cities. After repeated Google searches (“Richard Florida gay,” “Is Richard Florida gay?” “Richard Florida flaming homosexual,” “Richard Florida hypocrite,” “Is Richard Florida secretly or openly gay or is he just a misguided economist who has no concept of how unbelievably offensive his social experiments and de-humanizing measures are?” and on and on) I am still not sure. Richard Florida has a tendency to wear his shirts with the top few buttons undone, exposing a curiously smooth chest. He is generally well-coiffed (or at least rather meticulously so). But that doesn’t really get us too far. I digress, however. His sexuality isn’t really that important.

What is important, however, is Florida’s use of gayness as a fetishistic mechanism through which to identify market indicators. I’m only going to highlight several distinct ways in which his project demonstrates problematic views of gayness and thus renders his “index” fairly irrelevant, though there are innumerable reasons to find his argument ridiculous. The first is to look at his argument’s development. In his best-selling work The Rise of the Creative Class (New York: Basic Books, 2002), Florida introduces the gay index in a chapter titled “Technology, Talent and Tolerance.” He begins by talking about the importance of tolerance in attracting high-tech industries to cities, citing past work by authors such as Pascal Zachary that points to the importance of racial acceptance and openness to immigration as paramount to innovation and economic growth in urban environments. Florida (and Zachary by proxy) cites key statistics of mass immigration to American cities in the 1990s—much of it to New York, Chicago, Phoenix, Atlanta and Los Angeles.

Florida highlights the correlation between patterns of migration and economic growth in these same cities, which provides the basis for his comparable correlation between gays and high-tech industries. Pausing, for a moment, on the migrant-growth relationship, it is crucial to highlight the misrepresentation at play here. As pointed out by sociologist Saskia Sassen (who has seemingly endless empirical data to support her argument), the vast majority of immigrants who move to these cities support advanced industries as service workers—low-wage, low-skill laborers whose quality of life is vastly different from that of the “creative class” that earns the cities their reputation. Unlike Florida’s romanticized perception that countless model minorities are arriving, en masse, with tears in their eyes and hope in their hearts to start valuable enterprises across America, the reality is that most of these immigrants end up with bottom-of-the-barrel jobs that no one else will take.

I realize that I’m now attacking Florida’s methodology, which I said I would avoid, but here I just want to note that the logical fallacy of false causality—that the immigrants lead to increased creative industries—has a significant bearing on gays and high-tech industry. Florida is careful to point out that of course his argument isn’t that high-tech industries are full of homosexuals. In fact, few gays work in these profit-generating industries at all. What Florida points out, however, provides a meaningful insight into the role gays serve for these white (or Indian or East Asian—Florida’s favored minorities) engineers and computer-nerds: that gays decide to locate somewhere suggests that that geographical space is open and accepting. In other words, if gays are allowed to live there, won’t nerds be welcome too? (Seriously, this is taken straight from Florida. Page 258 of the paperback.)

If Florida’s hope is that a large number of gay citizens act as a predictive index for the potential of a city to house high-tech industry, what we’re really talking about is gays as guinea pigs. Florida’s claims about cities align with familiar patterns of habitation within them, particularly within gentrification arguments. The familiar narrative goes: first the gays move in, then the artists, then the yuppie hipster families, then the middle class. But obviously the gays weren’t first. The narrative implies a certain kind of urban grey-zone as a beginning point, where drug-addicts, non-model minorities, and general undesirables rove the streets, leaving opened fire hydrants, burning garbage bins, and a general gritty cacophony wherever they go. That Florida first equates gays (“the new outsiders”) with immigrants, and then casts them as the precursor to the bohemian influx, demonstrates the role that the homosexual plays in this perverse narrative—bridging the gap between poor ethnics and young artists.

This role is inextricably linked with what French author Guy Hocquenghem terms “the criminalization” of the homosexual—by virtue of being gay, these citizens occupy a curious position of being criminal enough to live in the margins while white enough to make those areas appear safe. And yes, for the most part the gays in these neighborhoods are white—from London’s Vauxhall (now also part of Brixton), Boston’s South End, and New York’s Chelsea to Chicago’s Boystown and Los Angeles’ West Hollywood. And so the gays are the guinea pigs, sent to the periphery to make it safe for young white artists and café-goers all the way through to middle class families, negotiating color and difference and mediating what is edgy and safe. Before I’m accused of setting up a straw man, though, I should acknowledge that Florida talks about cities and economic growth, not about neighborhoods and gentrification. But by evoking the argument of acceptance and tolerance, and of nerdy IT guys walking around without fear of harassment, I would argue that Florida is talking about cities on a neighborhood level. Clearly, even if the gay index for a city is high, there’s no argument that a tech-firm should locate in undeveloped, high-crime neighborhoods. To reap the advantages associated with the diversity of the citizens, firms must locate where those groups reside, furthering the process of development and gentrification.

And what comes after gentrification? As in the neighborhoods that attract countless immigrants, where do these households go when displaced by the influx of suburbanites who can now walk the streets? It depends on the city, and the answer is rarely promising. But thank god we’ve attracted “the creative class” (read: college-educated, safe, “nerdy,” largely white or model ethnic), and attracted revitalization and economic growth. Thank god those gays have such a good eye for design and interior decorating, for building rehabilitation and kitschy stores, for gourmet food and fine living. Thank god they make neighborhoods consumable by all, indicate where cities should spend their money and where new firms should locate.

Forget that, according to Florida, those new firms don’t employ a number of gays proportionate to the area. Forget that the “new outsiders” (not to mention the old outsiders) see a relatively small benefit in the transformations of cities and neighborhoods (is there anyone left out there who sincerely believes in trickle-down?). Of course I don’t equate urban (mostly white) gays with low-income populations. Homosexual partners and households have been demonstrated to have inordinately high incomes—but these aren’t necessarily related to the industries they attract. Their role in the urban economy is far more complex. Florida, of course, digs his own grave by treating such a complex facet of the population as a singular unit—an “index”—and yet his ideas are hard to ignore. He is a pop-economist in the most ignoble sense of the word, promulgating stereotypes and recommending a business model that embraces gentrification and exploitation all within the language and presentation of a marketable “strategy.”

A final question: while gay men might be romanticized for their sense of aesthetic and design mixed with urban grittiness—the perfect combination for faux “thrill”-seeking city-dwellers—where the hell do lesbians fit in to Florida’s framework? I’m thinking of New York’s Park Slope and Stoke Newington in London, and I remain a bit unsure of how we can exploit them.

On Why and When Fiction Writers First Publish

“If you don’t make masterpiece by time you twenty five you nothing,” went the advice of a drunk literature professor. I was a sophomore. The nineteenth-century authors I admired had all first published before the age the professor put forward. Twenty-five became the longitudinal line where my flat world ended. Twenty-five was crossed without masterpiece or incident. I found solace in the biographies of contemporary writers, most of whom first published at an older age.

Why the age difference from one century to the next?

To begin, I posit that the apprenticeship period of a writer, before a publishable novel is completed, lasts approximately eight years and involves three components: 1) lots of writing, much of it crap, an unfinished or rejected opus or three, a novel that was talked about more than it was ever written, some short stories; 2) a fair amount of reading, not from any canon in particular, enough to get a sense of what is out there; 3) life experience—bullfighting and shooting heroin, sure—but more having lived and become aware of one’s existence in a way that can be processed many, many times over to be used in stories. The healthy realization that instead of writing the greatest book ever one should focus on a good story one can tell well can be filed under the third component. Factor in necessary talent and the budding writer is on his or her way to a literary debut.

(The debut may never take place, and occasionally it occurs after less time.)

Tolstoy completed his apprenticeship young in part because he was mind-numbingly rich—he lived on a Rhode Island-sized estate worked by serfs—and had lots of free time. By free time I mean the time to work as an around-the-clock unpaid writer, which in Tolstoy’s case meant he was able to pump out short stories thick enough to qualify as assault weapons by 23. Dostoevsky’s provenance was more middle class (his father was a doctor to the indigent), but a middle class that came with amenities far greater than full cable and a second car. The Western world was less equitable, with a lot of poor people available to do the chores and errands that would be done by the budding author today. Dostoevsky too, pre-gulag, had his free time, first publishing to great fanfare at 23.

For those with access to it, education was better in the nineteenth century. The richest writers had private tutors. The writers who went to school (Balzac, and Dickens intermittently) received better, more thorough educations than are readily available today. Memorization of poems was central to understanding literature, languages were rigorously taught, and correspondence and the discipline to write constantly were imperatives. Without looking far beyond the routines that were handed to them as adolescents, they fulfilled large parts of their apprenticeship.

The broadly romanticized Lost Generation of the 1920s first published at a slightly later age. The middle class was larger, and education was more universal. They came from a range of households and schoolings—Dos Passos, loaded, boarding school and college; Hemingway, not so loaded, public school and no college. But the available education was still better than today’s. Reading was more a core part of curriculums, correspondence remained essential, Greek and Latin were taught. And it is not that I believe a classical education is best, simply that writing is an exercise in shaping language, and early knowledge of its anatomies feeds a person when he or she starts thinking as a writer. The challenge was finding the time to write, which is part of why they all went to Paris—still reeling from WWI, an economically brittle Paris was cheap. The ability to live well for not much gave them the incentive and time to finish their first works. Getting to Paris meant time working and traveling, and that interval tacked on about two years to their debuts.

Why and when people published in the 19th century was mostly a matter of pedigree. Why and when people first published in the 20th century was a matter of cheap rent. From Paris, to the West and East Village, to Berlin, writers roamed much of the Western world looking for cities in economic decline where they could work unperturbed.

Today education is essentially universal, but of mixed quality. In the United States the solution is a master’s degree in writing, where the differing levels of education can be calibrated, the safety of a campus buffering young writers from economic ebbs and flows. A student at NYU or Columbia can live in currently unaffordable New York thanks to subsidized low rent and money from a job teaching undergraduates once or twice a week. Because the youngest a person is likely to enter grad school is 22, master’s programs have pushed the age of debuts up, as people fulfill the requirements of their apprenticeship at a later age.

I am ignoring will. Regardless of provenance, schooling and available time, where the writer has had the will and talent, he or she has published. Kafka had a full-time job at an insurance company. The Chilean writer Roberto Bolano, the son of a truck driver, traveled the world, holding jobs no more exalted than security guard. Both men wrote at night and published late in life, their reputations propelled far into the future by the forces of their wills. Black writers of the mid-century, Baldwin and Wright, wrote their first works in the vacuum of a society closed off from their voices. They established places for themselves with their wills. Master’s programs have had the positive effect of honoring and financing the bright talents who earlier pushed forward alone. But the rest must pay, a lot, and the programs have had the inverse effect of excluding those of mixed or still growing talent and little funds, and not just from an education, but from direct avenues to agents and publishing houses.

A corrective mechanism exists. During economic downturns the plights of the excluded are chronicled and sensationalized in pulp. Pulp’s goal of titillating is easier to achieve than literature’s goal of moving the reader. The apprenticeship is shorter and can respond to social changes more swiftly. New York currently has 800,000 millionaires and, in the Bronx, the poorest urban county in the nation, splitting the city between a community that can afford graduate school and comes from a decent education and another that comes from stunted public schools, unemployment of 20 percent and up, and high crime. In the 1920s, when the Lost Generation fled to Europe, pulp was a local reaction by those who could not afford a ticket out of the country. The best pulp works are considered literature. The rest are no better than the genre exercises they aspire to be.

On fold-out tables on 125th Street in Harlem, near the courthouses on Chambers Street, on Fulton Street in Brooklyn, and on Third Avenue in the Bronx, a new pulp is sold under the moniker of urban literature. A handful of titles have sold in the hundreds of thousands; Borders and Barnes and Noble, depending on the store location, dedicate sections to the genre. I have attempted reading some urban literature and found it on the whole unmoving and conventionally titillating, but I am open to attempting more titles. I contacted the office of Triple Crown Publications, which specializes in the genre, wanting to know the average age of their authors. The answer was between 20 and 30; the youngest, Mallori McNeal, was 16 when she first published. If literature is what Ms. McNeal wants to write, a couple of drafts and some experience from now, she’ll be 24.

monday musing: loving michnik


I can’t pinpoint the specific day or time that I fell in love with Adam Michnik. Most likely it is something that crept up on me slowly. I liked him, I read him, I liked him, I read him some more, and then suddenly, I loved him. I finally came to see all of this just recently, thumbing through the essays and interviews collected in “Letters From Freedom: Post-Cold War Realities and Perspectives.”

These are the thoughts of a man who was of his time, acting in his time, and yet simultaneously able to see it all as if from on high, as if a version of himself was hovering above the other him, watching himself and reassuring himself from the late ’60s until the early ’90s as he turned into one of the preeminent dissidents of the Eastern Bloc. Anyway, something must have been guiding him, some impish little daimon sitting on his shoulder and convincing him through imprisonments by the communist government in Poland that history was on his side—or at least basic decency—although that proposition often must have seemed utterly laughable. Perhaps he now takes on a special glow precisely because he won, because he was right that a dogged emphasis simply on telling the truth, on the basic dignity of an individual human being, would rise to the top through tough times.

If he had been wrong, if he had failed, if history had broken differently, then Adam Michnik wouldn’t have the glow. On the cover of my copy of “Letters From Freedom” there’s a picture of Michnik. He has his right hand on his forehead and he’s looking out through the crook of his arm with a faraway gaze. It is the gaze of a cocky son-of-a-bitch who got it right. And he not only got it right, he got it right and refused to gloat about it very much, was constantly aware that you never get vindication until you stop looking for it and even then life goes on. That’s why he always stressed that all he really ever wanted for Poland was normalcy, boredom. “Grey is beautiful,” he said, and “democracy is a continuous articulation of particular interests, a diligent search for compromise among them, a marketplace of passions, emotions, hatreds, and hopes; it is eternal imperfection, a mixture of sinfulness, saintliness, and monkey business.”

He was tired of all the crap, all the weird and shitty historical promises that continuously clowned the 20th century. That’s the truly funny, maddening, and then eventually loveable thing about his cocky son-of-a-bitch gaze. It is the cockiness of modesty. This is a man who wrote a book in prison that concluded with a delicate quote from Julian Tuwim directed to the communists of the world: “Kiss my Ass.” He’s pretty sure he’ll get the last laugh because he’s the only one in the room who isn’t promising much, who’s just fighting for a political arrangement whereby everyone has the opportunity to make an ass of himself. That is his vision of civil society and it is inspiring in its mundanity. Let us have, Michnik says with a Polish twinkle in his eyes, a civil society whereby we can parade down the street like the fools we are and be reasonably sure that none of our neighbors are reporting that fact to the local branch of the secret police. There are loftier visions for humankind no doubt. But I’m not sure that there are any more humane.

It is that goofy beautiful vision of civil society that enabled Michnik to put his cocky son-of-a-bitch gaze on and stare down General Jaruzelski from across the historical divide when martial law was declared on December 13th, 1981. Let us not forget that those were scary times, big times. Michnik had his gaze but Jaruzelski had his huge tinted glasses that he must have smuggled out to Kim Jong-il after The Wall came down. Michnik stuck to his guns, the small and harmless guns of a civil society for normal people. He was convinced that he would eventually emerge into “the bright square of freedom” but he also worried that he and his contemporaries might “return like ghosts who hate the world, cannot understand it, and are unable to live in it.” He prayed that “we do not change from prisoners into prison guards.” After Michnik got his victory he was still happy to sit down with General Jaruzelski and have a friendly chat about those days and their disagreements. You can read the exchange. It is published as “We Can Talk Without Hatred.”

And that is why, I suppose, I have fallen in love with Mr. Michnik. It is a special category of love. Its broader genus could be labeled ‘literary love’: this is when we fall in love with those individuals who have written things that confirm for us our deepest and innermost sense of what we hope the world can be, if only in bits and patches. This kind of communication is intimate and powerful, and literary love—for all its lack of physical expression—can be a swoonful and insistent kind of love. One must read ever more works from the hand of the beloved. Even the crappiest short story carries with it portentous weight and is scoured for secret messages from the writer to the one and only person who truly understands that writer, ‘Me’, whoever that ‘me’ may be. Within the genus ‘literary love’ is the species ‘dissident-writer love’. ‘Dissident-writer love’ has all the intensity of ‘literary love’ with the added bonus that the dissident writer not only pens brilliant essays, but is also wrapped in a veritable halo of personal courage and potential martyrdom, made all the more enticing if, like that cruel seducer Michnik, the dissident downplays and sidesteps every attempt to crown him as hero. It certainly worked for me. I confess myself a man in love.

Monday, June 11, 2007

Monday Musing: Why There Are So Many Men

Confusion reigns in many popular discussions of evolution, and 3QD is not immune. I was inspired to write this Monday Musing today at least in part by a comment left by Ghostman on a post about autism a few days ago. In it, among other things, he theorizes that:

…autism, far from a brain disorder or malfunction, is an evolutionary reaction to the electrified, computerized world, and that once our brains iron out the wrinkles, we will come to look at modern autism as the first difficult steps toward a biological advancement of the human brain—an evolutionary improvement in the way we think, compute, and, yes, imagine…

…I believe the electrified, computerized world is actually changing the makeup of our brains. And that autism is one of the effects of this change…

…Consider the two most well-known symptoms of autism: lack of social skills (encompassing language, empathy, etc.) and enhanced recognition of and appreciation for patterns (often including improved memory and mathematical ability). These, I thought, do not seem to be the characteristics of a human; they are the characteristics of a computer. Computers are bad at emotions, language, social situations. Computers are good at math, memory, patterns. Furthermore, as one reads the literature, one is struck by how many teachers, parents, therapists, etc., comment on how compatible their autistic students, children, patients are with computers. Half of them seem outright amazed. But if one thinks that autism comes largely from computers, one would not be amazed by this, one would expect it…

—Ghostman, June 5, 2007

It’s late at night. It is too hard for me to attempt a sympathetic interpretation of this, and in the space that I have, I really cannot even seriously address the various confusions about evolution that are displayed here. (Even if brains were changed by “electricity” or “computers,” whatever that means, you should know, Ghostman, that ONLY changes to the DNA of the nuclei of sperm or egg cells can possibly be passed on to one’s offspring—and that is just one of the many misunderstandings of evolution that you betray.) Ghostman, I have no doubt that you are well-intentioned, but, my friend, you’ve got to learn something real about evolution before popping off, okay? Instead, all I can do is make my column today all about how Ghostman and others can most quickly educate themselves about evolution and its surrounding theory.

Unlike, say, quantum theory, the theory of evolution is something a lot of people think they understand pretty well. After all, no advanced math is required to understand the silly and tautological phrase which for some represents all there is to know about evolution: “survival of the fittest.” (Who are the fittest? Why, those that survive, of course!) Evolution is an elaborate and broad and subtle theory, with which one needs to spend some years to even begin to get a sense of its richness. (And parts of it do happen to be best expressed mathematically.) Luckily for non-biologists like me (and Ghostman) there exists a beacon of hope in the form of a book: thirty-one years ago, Richard Dawkins wrote what in my mind must be the best presentation of the complexly intertwined ideas and concepts that constitute the theory of evolution since Darwin wrote The Origin of Species itself. I am referring, of course, to Dawkins’s magisterial work, The Selfish Gene. My column today can be seen as essentially an exhortation, a request, even an abject plea: if you haven’t read this book, please click here now, buy it, and read it from cover to cover as soon as possible. (Get the 30th-anniversary edition, which restores Robert Trivers’s introduction to the original, which had been deleted from subsequent editions, and which also contains a brand-new preface by Dawkins detailing the book’s history, in addition to the then-new preface by Dawkins for the 1989 second edition, and, of course, the original preface. The bibliography has also been updated.) If you haven’t read it but think that you already know what it is about, you just aren’t getting my message. Even Dawkins’s foes (such as H. Allen Orr, who recently trashed Dawkins’s latest book, The God Delusion, in the New York Review of Books) usually admit that The Selfish Gene is one of the most beautiful and clear expositions of science ever written. And it is not about some one thing or idea that can be easily summarized Cliffs Notes-style. You just gotta read it. Even in an earlier Monday Musing in which I mostly criticized Dawkins for unnecessarily and implicitly defending a certain philosophical theory of truth in some of his writings, I also had this to say:

Richard Dawkins has been an intellectual hero of mine since college, where I first read The Selfish Gene. Though I thought I understood the theory of evolution before I read that book, reading it was such a revelation (not to mention sheer enjoyment) that afterward I marveled at the poverty of my own previous understanding. In that (his first) book, Dawkins’s main and brilliant innovation is to look at various biological phenomena from the perspective of a gene, rather than that of the individual who possesses that gene, or the species to which that individual belongs, or some other entity. This seemingly simple perspectival shift turns out to have extraordinary explanatory power, and actually solves many biological puzzles. The delightful pleasure of the book lies in Dawkins’s bringing together his confident command of evolutionary theory with concrete examples drawn from his astoundingly wide knowledge of zoology. Who doesn’t enjoy being told stories about animals?

What I’d like to do in the rest of this column today (in my admittedly ever-desperate hope that it may actually convince someone who hasn’t to read the book) is give a small example of just how brilliantly Dawkins explains questions in evolutionary biology and then answers them in a profoundly satisfying manner in The Selfish Gene. I have chosen this particular topic out of the extravagance of interesting biological issues that Dawkins presents in his book precisely because I had never even thought to formulate the problem, much less guess its elegant and (I think, I hope!) easily grasped solution, before reading him, and because I think I can present it reasonably briefly (we’ll see!). So without further delay, here’s the problem: Why are there so many men?

I’ll explain. Women, because they must carry a child for 40 weeks, can only have a rather limited number of children in their lifetimes. Of course, there are limits to the number of children that men can have too, but they are much higher in terms of the actual numbers. (I used to be a big reader of the Guinness Book of World Records, and vaguely recall reading at some point that some king or other of Morocco holds the confirmed record for men with more than 700 children! Look it up if you like, but you get the idea of the difference.) Now, those who believe that evolution works for the benefit of groups of individuals, such as species (they are called group-selectionists, and the late, great essayist Stephen Jay Gould was one, but he turns out to have been wrong in this, as in quite a few other things—punctuated equilibrium, anyone?), must answer the following question: in a population of humans, fewer than 10% males would suffice to successfully mate with all the females, so why are 50% (roughly speaking) of humans males? Well, maybe women need men around in some marriage-like situation to take care of their children, otherwise not enough of their children would survive. This is plausible, after all. But then, why is the proportion of men to women almost exactly 50/50? How come it’s not 45/55, or 55/45 for that matter, depending on exactly how much the males are needed? Look at some other animal species: we find that cats have the same almost exact 50/50 ratio of males to females. So do dogs, cows, mice, fish, chimpanzees, birds, and walruses. Some species of animals share parenting duties equally between the male and the female, while in others, the female puts in almost all the effort in raising children, but all the ones I have mentioned have the same 50/50 ratio of sexes. Why?

(I am not even mentioning the fact that even before the conception of a child, the female has already put in much more effort in producing it than the male has: consider a species of bird in which the male and female spend equal amounts of time hatching the egg after it is laid, and then also spend equal amounts of time and effort feeding and caring for the hatchlings into adulthood: the female has already made a much greater investment by laying a relatively huge egg—imagine the effort a female chicken expends finding and eating enough food to lay one of the super-nutritious eggs in your refrigerator. Sperm, meanwhile, are a dime a dozen-million! This greater investment of energy on the part of females is the reason, by the way, that human females produce one fertilizable egg a month, and I produce several hundred million sperm a day.)

Let me tell you something about walruses: most walrus males will die virgins. (But almost all females will mate.) Only a few dominant walrus males monopolize most of the females (in mating terms). So what’s the point of having all those extra males around, then? They take up food and resources, but in the only thing that matters to evolution, they are useless, because they do not reproduce. From a species point of view, it would be better if only a small proportion of walruses were males and the rest were females, in the sense that such a species of walrus would make much more efficient use of its resources and would, according to the logic of the group-selectionists, soon wipe out the actual existing species of walrus with its inefficient 50/50 ratio of males to females. So why don’t they?

Here’s why: because a population of walruses (substitute any of the other animals I have mentioned, including humans, for the walruses in this example) with, say, 10% males and 90% females (or any other non-50/50 ratio), would not be stable. Why not? Remember that each male is producing almost ten times as many children as any female (by successfully mating with, on average, close to ten females). Imagine such a population. If you were a male in this kind of population, it would be to your evolutionary advantage to produce more sons than daughters, because each son could be expected to produce roughly ten times as many offspring as any of your daughters. Got it? Reread the last few sentences and convince yourself that what I am saying is true. Look, suppose that the average male walrus fathers 100 children, and the average female walrus mothers 10 baby walruses. Okay? Here’s the crux of the matter: suppose a mutation arose in one of the male walruses (as it well might over a large number of generations) that made it such that this particular male walrus had more Y (male-producing) sperm than X (female-producing) sperm. In other words, the walrus produced sperm that would result in more male offspring than female ones. This gene would spread like wildfire through the described population. Within a few generations, more and more male walruses would have the gene that makes them have more male offspring than female ones, and soon you would get to the 50/50 ratio that we see in the real world. The same argument applies for females: any mutation in a female that caused her to produce more male offspring than female ones (though sex is determined by the sperm, not the egg, there are other mechanisms the female might employ to affect the sex ratio) would spread quickly in this population, changing the ratio from 10/90 closer to 50/50. Do you see?

No? Well, that’s why I keep urging you to read Dawkins’s book. He is a much better writer, and a much better brain, honestly, than I am, and even if I can’t convince you, I really think that he stands a good chance of winning you over. Get the book. Now!
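For the programmers among you, here is one more crutch: a toy simulation of the walrus argument. To be clear, this sketch is mine, not Dawkins’s, and the population size, starting sex ratio, and the 0.1/0.9 gene values are invented parameters, chosen only to make the effect visible.

```python
import random

random.seed(1)  # make the run reproducible
POP, GENERATIONS = 2000, 30

# Haploid toy model: an individual is (sex, bias), where "bias" is the
# probability that each of its offspring is born male. The population
# starts only 10% male; most founders carry a daughter-biased gene
# (bias 0.1), while 10% carry a mutant son-biased gene (bias 0.9).
def make_founder():
    sex = "M" if random.random() < 0.10 else "F"
    bias = 0.9 if random.random() < 0.10 else 0.1
    return (sex, bias)

population = [make_founder() for _ in range(POP)]

for gen in range(GENERATIONS + 1):
    males = [bias for sex, bias in population if sex == "M"]
    females = [bias for sex, bias in population if sex == "F"]
    if gen % 5 == 0:
        print(f"gen {gen:2d}: male fraction = {len(males) / POP:.2f}")
    # Each child draws a random father and a random mother, so while
    # males are scarce, each male fathers roughly nine times as many
    # children as any one female. The child inherits its sex-ratio
    # gene from one parent, picked at random.
    children = []
    for _ in range(POP):
        bias = random.choice((random.choice(males), random.choice(females)))
        children.append(("M" if random.random() < bias else "F", bias))
    population = children
```

Run it and the printed male fraction should climb from about 0.10 and settle near 0.50 within a handful of generations. That is the whole argument in miniature: whenever one sex is rare, genes that overproduce that sex gain ground, and the only ratio stable against such invasions is 50/50.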


All my previous Monday Musings can be seen here.

Lunar Refractions: Gaudio mihi quod vidi

I’ve just returned from a few short days in my hometown, a very different part of New York than the more urban(e) one I now experience each day. Ten years have passed since I left, and in unconscious recognition of the anniversary I went back to visit the spots that, in their—or my—absence, have come to hold some of my most formative memories.

I returned, laptop in hand and loads of work to do, to the small town settled in 1787 where I grew up walking ten minutes to school with my brother each day, playing AYSO soccer, climbing trees, walking in the woods, building tree and snow forts, and generally having tons of time to pursue whatever it is my vivid imagination might have desired. By my teens, like so many others in villages of under 2,000 inhabitants, in graduating classes of about 125 kids, all I wanted was out—out to the big City, out to the world, out to work, out to life. Being the incredibly lucky girl I am, I was granted my wish, and went off to study one of those subjects that simply wasn’t considered viable, or respectable, or much of anything but marginalized by the nevertheless superb public school system that was all I’d known until then.

But before I took flight, and without really thinking about it (though my parents likely had), I’d spent years filling those delightfully unstructured hours after school with activities that, while important, I had a suspicion would someday have to be demoted to mere hobbies: workshops in painting, drawing, ceramics, and other craftsy stuff. It was with this attitude—the mindset that tells you something is enjoyable yet trivial (at least to the rest of the world)—that I went to my first painting class at the Munson-Williams-Proctor Art Institute in Utica. A ten-minute drive from my own quaint little college town, Utica was a real pit: the traffic lights on the city’s main drag, spaced at what seemed to be one-block intervals, were synchronized for the speed cars moved in the 1940s. The most memorable ads in the paper, even for a little kid, were either for used car dealerships or strip clubs. Graphically, the newest sign for any business looked like it came from the 1950s. Curiously Anglicized Polish and Italian names abounded, with even more curious pronunciations on local radio and television commercials. None of this has really changed much, except for the schools’ installation of metal detectors. The Polish and Italian names are giving way to Bosnian and South American names, accompanied by a growth in barely literate “my-grandpa-learned-English, why-can’t-these-scrappy-people-get-it?” old-timers’ letters to the editor in favor of declaring English an official language, whatever that might mean.

Not that I intend to get sidetracked: Utica seemed like such a pit to me, a moderately privileged kid from a small liberal arts college town, that I just assumed—assigning guilt by association—that anything found there couldn’t possibly lead to much. So when this painting class I took with Ms. De Visser brought us into the museum portion of the Munstitute, as we called it, I didn’t expect to see much on the walls. They told us Philip Johnson designed the building, constructed in 1960, and we just saw it for the brutal, foreboding, windowless box it appeared to be. We were to choose a painting from the permanent collection and copy it in drawing, then choose another to copy in drawing and a more developed painting. Docents made a big deal of Thomas Cole’s Voyage of Life (though a second set is in the National Gallery in DC, the original set is here) and Jackson Pollock’s Number 2, 1949, but I was taken with the obscure, more figurative stuff: an 1888 neoclassical canvas by Francis Davis Millet titled After the Festival, of a wistful young woman in flowing robes with rose-bedeck’d hair, her graceful wrist perched on a tambourine; and a 1948 panel called Double Date Delayed, No. 1, by Isabel Bishop—whoever Millet and Bishop were, whoever Johnson was…. Apparently I indulged my predilection for depictions of missing, absent subjects from an early age.

I came across these two works on Friday after forgetting all about them and the hard work I put into copying them. In the meantime Cole’s other major cycle, Course of Empire, has gained renewed attention thanks to a band named after it and newer works by Ed Ruscha. I took a closer look at the old Voyage of Life again, noting the fairly didactic little changes from one canvas to the next to facilitate viewers’ reading of the fixed narrative—the hourglass on the ship’s prow marks time, vanishing by the last scene, and the sculpted faces on the ship’s side reflect the mood of each season of life—and picking up an amusing little activity booklet for kids to help them decipher devices such as allegory, symbolism, etc. Across from this cycle was a remarkably kitsch, not-so-famous 1826 Audubon, Two Cats Fighting, painted in two days in his Edinburgh studio, quickly enough to toss the rotting carcasses out before the stench of the two cats and their coveted squirrel overcame the space.

Leaving the gallery of Audubon, Cole, and Tiffany silver, I strolled past Dan Christiansen’s 1968 Draco, a predecessor of his more recent works with similar titles and a gift of Philip Johnson (who’d ever have known…), whose colors echoed those of Pollock’s Number 2 across the way. Warhol’s 1967 purple and silver Big Electric Chair was there, along with David Smith’s 1950 The Letter and two surprising Gustons—a classic 1975 Table at Night and a surprisingly Shahn-esque, Lam-esque Porch No. 2, dated twenty-eight years earlier, hanging right next to Shahn’s 1958 painting of what seems to be a drowning man, The Parable.

Jenny Holzer’s 1984 truism THE BEGINNING OF THE WAR WILL BE SECRET, acquired in 1993, oh-so-timely then as now, was just about the last thing I’d expect to see here—but, then again, so was a large Louise Bourgeois spider sculpture, along with most of the other works. This surprisingly strong collection was begun in the 1860s and carried on by the two daughters of James Watson Williams and Helen Elizabeth Munson Williams, who both married entrepreneurs interested in collecting, had no children, and hence had loads of money to dedicate to that end. The institute was established in 1919, and the core works were joined by post-war acquisitions. 1940s director Harris Prior worked with collector Edward Root as acquisitions consultant, and Root’s bequests were later joined by those of architect Philip Johnson, Musa Guston (Philip Guston’s wife), and other people who saw art as the keystone of their lives, as it has since become for me.

One of the newest pieces on exhibit was Elaine Reichek’s Sampler I, a cross-stitch done in 2000. Taking Emily Dickinson’s 1862 poem, #640, “I Cannot Live with You,” Reichek crowns that wrenching verse with an image from Henry Peacham’s 1612 book Minerva Britannia. The emblem pictures a weeping eye floating in the sky and the motto Hei mihi quod vidi (“Oh woe is me because I see”). If I’d had my red pen with me, I might’ve made a correction to their catalogue, so it would read Heia, gaudio mihi quod vidi. That sentiment will just have to remain part of the personal catalogue I’ve been amassing for the last few years, begun and unexpectedly enriched in the humble Mohawk Valley of Central New York.

Other Lunar Refractions can be read here.

America and Empire: Thoughts on a Debate

by Alex Cooley

In a recent issue of Foreign Affairs, Alex Motyl posed the question, “Do past empires hold lessons for U.S. foreign policy today?” In a review of two new books (an edited volume by Craig Calhoun and a study by Charles Maier), he concluded that “efforts [to show that they do] yield little payoff.”

To be sure, the use of the term “empire” has become commonplace in descriptions of contemporary U.S. foreign policy. Yet, save for a few exceptions, American political scientists have until now mostly neglected the theoretical investigation of empires and the dynamics of imperial relations. Filling a massive gap in the literature, political scientists Dan Nexon and Thomas Wright’s new article “What’s at Stake in the American Empire Debate,” published in the May 2007 American Political Science Review, is one of the most thought-provoking and policy-relevant scholarly articles to appear in the field in recent years. The article builds on much of Nexon’s previous work on the network properties of early-modern empires, but applies these insights to some of the central problems confronting today’s U.S. foreign policy community.

Central to Nexon and Wright’s analysis is the contention that the term “empire” has been stripped of much of its analytical content and, instead, is now used (and over-used) to describe the aggressive or domineering foreign policy actions of the United States. Accordingly, the terms “empire” and “imperial” have become fused with normative connotations about unchecked expansionism. Consequently, policymakers disdain the term’s implications while avoiding coming to terms with the actual political logics and trade-offs produced by an imperial political order. Former Secretary of Defense Donald Rumsfeld’s now infamous quote that “we don’t do empire” comes to mind, although some now would add the word “well” to the end of that particular musing.

In a theoretical tour de force, Nexon and Wright suspend the current normative use of the term in order to explore the actual structural or network characteristics of imperial systems. In doing so, they upend many of the assumptions of the international relations discipline.

The authors propose that two structural properties distinguish imperial systems from other types of hierarchical political orders such as unipolar systems or hegemonic orders. First, empires are systems where a central power exercises control over peripheral polities indirectly, through intermediaries or “informal” rule. Second, these imperial arrangements or contracts differ across a central power’s different peripheries. So the terms of governance of imperial rule over “Periphery A” are not the same as those over peripheries B or C. In this way, the authors distinguish imperial rule from a federal arrangement, under which all contracts with subordinate units are equal. In turn, the various peripheries of an imperial system remain segmented or walled off from one another, as their relations are mediated and managed by the core power. In visual terms, an imperial system resembles the hub and spokes of a rimless wheel: rimless because the individual peripheries at the end of each spoke remain disconnected from one another.

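Since the structure is easy to spell out, here is a minimal sketch of it in code. The sketch is my own illustration, not Nexon and Wright’s, and the peripheries, intermediaries, and tribute terms are invented placeholders.

```python
# An imperial order as a rimless hub-and-spoke network: the core holds
# a *different* contract with each periphery, and the peripheries have
# no ties to one another -- the structural basis of divide-and-rule.
empire = {
    "core": "Imperial Center",
    # Heterogeneous contracts: the terms differ periphery by periphery.
    "contracts": {
        "Periphery A": {"intermediary": "client king", "tribute": "grain"},
        "Periphery B": {"intermediary": "governor", "tribute": "silver"},
        "Periphery C": {"intermediary": "trading company", "tribute": "tax farm"},
    },
    # Rimless: the only edges in the network run core <-> periphery.
    "periphery_ties": [],
}

def is_imperial(order):
    """Apply both structural tests from the discussion above:
    heterogeneous contracts (unlike a federation, where the terms are
    uniform) and segmented, mutually disconnected peripheries."""
    terms = list(order["contracts"].values())
    heterogeneous = any(t != terms[0] for t in terms[1:])
    segmented = not order["periphery_ties"]
    return heterogeneous and segmented

print(is_imperial(empire))  # True for the order sketched above
```

Make the contracts identical and the order looks more like a federation; add ties among the peripheries and the segmentation that divide-and-rule depends on begins to erode.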

This simple network analytic has three major consequences for how international relations theorists should think about systemic dynamics. First, traditional balance-of-power concerns, the supposedly timeless practice of states playing power politics against each other, are supplanted by divide-and-rule dynamics. That is, empires seek to control, manage and extract resources from their peripheries, but must also prevent the formation of collective ties among peripheries that might become the basis for future anti-imperial collective action.

Second, the axis of relevant political relations shifts from interstate relations to intersocietal relations within peripheries, as imperial centers empower intermediaries to govern local populations according to specific markers of social status and hierarchy. From the center’s perspective, there is a trade-off between controlling the actions of an intermediary (the classic “principal-agent” problem) and allowing it to retain local legitimacy as a ruler.

Third, empires face perennial problems of legitimating their control, especially when they try to maintain authority across a wide audience of multiple peripheries that have very different demands and expectations from the center. Thus, a justification of rule to one periphery may completely contradict the rationale given to another. Accordingly, imperial centers must be mindful of their mixed messages or, more precisely, the structural capacity for peripheries to realize they are receiving mixed messages. As the authors observe, “Multivocal signaling is most effective when the two audiences either cannot or do not communicate with one another” (p. 264). All of these theoretical claims are illustrated with loads of fascinating anecdotes drawn from such seemingly disparate empires as the British East India Company, the Mongols, the Hapsburgs and the Soviet Union.

The “real world” implications of all of this rigorous theorizing are too numerous to summarize, but let me flag four important issues that the Nexon and Wright model raises.

First, this account of international relations provides a more robust account of Cold War politics than traditional balance-of-power accounts that focus just on the inter-state bipolar competition between the United States and the Soviet Union. The Nexon and Wright model explains both great power competition and why the United States and Soviet Union took such pains to intervene in the internal affairs of peripheral countries and reinforce the client status of allies (the Shah in Iran, Marcos in the Philippines, Park in South Korea) by providing them with private goods. Cold War politics exhibited both balance-of-power and divide and rule dynamics, even though the legitimating strategy of the latter was one of systemic anti-Communism.

Second, in terms of the Nexon/Wright model, America’s actual resort to imperial governance since the end of the Cold War has been declining, not increasing. Of course, the post-9/11 military campaigns in Afghanistan and Iraq have drawn more attention to the practice of “American Empire,” and the authors acknowledge the “empire”-like qualities of continuing U.S. intervention in these countries. But the more counterintuitive point is that America’s use of overtly imperial systems is actually not as widespread as it was during the 1960s and 1970s.

One major reason for this decline is that globalization – contra the claims of many globalization critics – undermines the conditions necessary for effective imperial management by the center. Today, imperial powers can no longer monopolize globalizing processes such as transnational information flows, media broadcasts, NGO activity and economic exchange. Such transnational flows undercut the informational firewalls that imperial managers have traditionally erected between their peripheries to maintain segmentation and prevent collective action. For example, Al Jazeera and other Middle Eastern cable news networks can instantly and effectively undermine U.S. legitimation strategies regarding its Middle East policy by broadcasting daily images of how America and its political clients routinely disregard human rights concerns and democratic ideals. The contradictions of multivocal legitimation strategies are more quickly, and effectively, exposed in our global era.

The U.S. experience with Uzbekistan is a good case in point. To support its military campaign in Afghanistan, the United States established a military base in southern Uzbekistan in fall 2001 and signed a security cooperation agreement with the Uzbek government. Under its terms, the United States aided and armed Uzbek security services in exchange for basing rights. However, it turned out that the Uzbek government had little interest in adhering to its human rights obligations and used its international support as a partner in the war on terror to eradicate all forms of domestic opposition, not just Islamists with ties to al-Qaeda.

The most egregious of these actions was the so-called Andijan massacre in May 2005, when troops dispatched by the Uzbek Ministry of the Interior killed hundreds of demonstrators in the eastern city of Andijan. Following these events, the cross-periphery dissonance between the justifications for promoting democratic state-building in Iraq and Afghanistan and the support for the heavy-handed actions of the Uzbek regime became an international embarrassment to U.S. policymakers. Even William Kristol, editor of the hawkish neo-conservative Weekly Standard, pointed out that the administration’s Uzbek policy simply was no longer compatible with the public logic of America’s involvement in the Middle East or its support of the colored revolutions in the other former Soviet states. In the end, the United States military was evicted from its base shortly afterward, as the Uzbek government grew increasingly concerned that U.S. officials were actually intent on fomenting democratic change. Multivocal signaling, in the Uzbek case, delegitimized U.S. relations with the Middle East, but it also lessened American credibility with the Uzbek government.

Again, the analytical point here is not that the United States is hypocritical in the conduct of its external relations; rather, it is that traditional “multivocal” appeals and legitimating strategies can no longer function effectively in a global setting where contradictory justifications cannot be restricted to specific peripheries and political clients.

Third, Nexon and Wright correctly focus our attention on the formal structures of hierarchy operating in the contemporary international system. But we should be aware that states and international actors often employ a mixture of different types of hierarchical organizational forms, some of which require intermediaries and others of which are more direct. Thus, Iraqi reconstruction has been delegated to intermediary private corporations and contractors, most of them American; however, the U.S. military continues to play the major security function directly within the country. Moreover, some states may also find themselves at the periphery of multiple “imperial centers” and their dictates. The former Communist countries of East Europe may be a case in point, as they implement both the conditions and expectations laid out by the EU while they integrate themselves into the security network of the United States (including its overseas basing network and/or global missile defense shield).

The fourth – and most unsavory – implication of the Nexon/Wright model concerns what an “effective” U.S. strategy would look like in Iraq. The authors convincingly point to some of the control problems that using unreliable intermediaries in Iraq has generated for the United States. But their own analytic scheme points to the broader problem inherent in current U.S. efforts to preserve Iraqi unity while maintaining a political balance among Iraq’s factions in a quasi-democratic setting.

In fact, the extreme implication of the Nexon/Wright model for U.S. policymakers would be to pursue “divide-and-rule” policies in Iraq more vigorously, instead of the contradictory nation-building policies of “unite and rule.” Practically, this would mean empowering or even openly arming Shiite and Kurdish factions against the Sunni minority and licensing such action in exchange for American patronage. Now, to be clear, the authors do not advocate such a policy (nor do I), but some others have, drawing upon this structural imperial logic.

All of this points to the fact that U.S. officials have not learned basic lessons from past imperial experiences. Indeed, the very explosion in the recent use of the term “empire” by critics of U.S. foreign policy merely highlights the failure of the U.S. to adequately manage its imperial cross-pressures and multivocal contradictions. Moreover, from this perspective, the rapid erosion of American legitimacy throughout the world – as evidenced in surveys like the Pew Survey on Global Attitudes – should be a much more central concern for the current managers of the American Empire than it was for its discredited recent champions.

Alex Cooley is Assistant Professor of Political Science at Barnard College, author of Logics of Hierarchy (Cornell University Press, 2005), and also served as 3 Quarks Daily’s World Cup 2006 correspondent.

Monday, June 4, 2007

Sir Edward Elgar: Allegro vivace e nobilmente

Australian poet and author Peter Nicholson writes 3 Quarks Daily’s Poetry and Culture column (see other columns here). There is an introduction to his work at peternicholson.com.au and at the NLA.

The one hundred and fiftieth anniversary of Edward Elgar’s birth fell on June 2, 2007. This anniversary has been the spur for some strange commentaries in Britain: Elgar’s music isn’t modern enough; its tub-thumping pomp and circumstance state music doesn’t reflect contemporary, multicultural Britain. How symbolic, some say, that Elgar’s face has just disappeared from the twenty pound note to be replaced by that of sensible Scottish economist Adam Smith.

This is political correctness gone barking. It is sobering to see that true greatness can still be blithely consigned, by some, to an imaginary junk heap of artistic detritus. Sensibly, the general public won’t have a bar of these musings, and no doubt Elgar’s music will continue to be comprehensively celebrated and performed.

Australian artists, for many years on the receiving end of condescension from points north, can relate to Elgar’s long struggle to establish himself as composer and artist in a hostile cultural environment. Anyone who tries this face-to-face on Australians these days is in for a rude surprise, though there really isn’t any cure for parochialism and snobbery, which are without end. However, in some parts hereabouts, there is now, somewhat inevitably, an Australia-first campaign whereby anything English is hammered—they think—into the ground: psychic punishment for past colonial sins of omission. Percy Grainger, for example, is held up as a preferable alternative to Elgar. As a republican, I have no desire to linger in antique realms, but I’m not going to be bludgeoned into rejecting greatness when it is there before me. English critics are partly to blame for Elgar’s reputation abroad with their endless references to Elgar’s Englishness. Elgar’s music belongs to the world, not just to England. (There is a specific Australian connection in Elgar’s music. He sets Adam Lindsay Gordon’s poem ‘The Swimmer’ as the last song in Sea Pictures—‘O brave white horses! you gather and gallop, / The storm sprite loosens the gusty reins . . .’) If I stand on Bondi beach on a wild afternoon, hearing the surge of Alassio in my head, or, in mourning, recall that last restatement of the ‘Spirit of Delight’ theme at the end of the Second Symphony (‘the passionate pilgrimage of a soul’), this is not reflux nostalgia. Here is the music equal to the depth of life. If you want, there are certainly plenty of alternatives to choose from.

Grandeur of spirit, and passion, in art, will never be consigned to a use-by date. Elgar’s story is a remarkable one of persistence through the awfulness of the English class system to the creation of great music, the first Britain had experienced since the time of Purcell. Elgar had a large chip on his shoulder because he, and his wife Alice, had to pay heavy dues in getting to this position of eminence. If, as I read, Elgar tried to wangle a peerage for himself, it was only what he deserved. Such splendour in the Malvern grass—the symphonies, the Violin Concerto, the Cello Concerto, the Enigma Variations, The Dream of Gerontius, Falstaff, the mass of ceremonial, occasional and salon music: all this music speaks of the seriousness and loveliness of the world, often with nobility, sometimes with wistfulness and melancholy.

You don’t have to be Roman Catholic to enjoy The Dream of Gerontius. What sort of mindset is it that can’t enjoy music of this kind because it doesn’t fit the listener’s prescriptive personal agenda? There are people who claim to be living, mentally, always in the present moment. The radical, the cutting edge, are what they crave. When you have the pile of bicycle parts waiting for you at the Whitney Biennial, why go mooning over some Degas? How tedious to sit through hours of Tristan when some hipster rap group is about to let loose at the latest ‘in’ venue. But what if Elgar or Degas or Wagner are, emotionally and creatively, more radical and cutting edge than they have ever begun to conceive? I live only in, and for, the present moment. Too bad if the present moment is dullness embalmed and then overhyped by the usual organs of capitalist increase.

Thus do some go their weary way, unaware of the marvels about them, forever out of reach because of ideological posturing or just plain ignorance. Well, I’m not forsaking Elgar for any whim of contemporary fashion and, if you don’t know the music of this composer, do yourself a favour that will repay you in kind one hundredfold.

Knowing he had composed a masterpiece, Elgar wrote at the end of the score of Gerontius the following words of Ruskin: ‘This is the best of me; for the rest, I ate, and drank, and slept, loved and hated, like another; my life was as a vapour, and is not; but this I saw and knew; this, if anything of mine, is worth your memory.’ However much these words apply to Gerontius, they also apply to the whole. The grocer’s daughter who became Prime Minister of the United Kingdom once told the nation, in very different circumstances, to rejoice; and we may well say of the piano tuner’s son who became a composer of world renown: rejoice that such a person may triumph, and that such music can be. 

Below the Fold: Is There a Doctor in the House?

Michael Blim

I lost my family doctor last week. Or rather, he has decided to undertake a boutique practice, and I can’t afford the annual fee. So, in a way, he is leaving me, and I am not happy about it.

Last week a friend of mine received a “Dear John” letter from her family physician. He is cutting 400 patients from his list, and he had selected her to be one of them. He is leaving her, and she is not happy either.

My friend and I are medically well connected. Both my partner and my friend work at a major research hospital in Boston. They know lots of doctors who know lots of doctors. And still, both of us are scrambling to find a family physician. Many have closed their practices to new patients, and even a well-placed word doesn’t always unlock their availability.

There must be primary care doctors who are not being overrun by patient demand, but in my quest to find a new doctor, I haven’t come across any in Boston with good track records who are not.

There are a lot of reasons why my friend and I have lost our doctors. First, it would seem pretty obvious that some doctors, particularly primary care doctors, would like to make more money. The average salary for an internist in 2005, according to the US Department of Labor, was $166,000, while surgeons, for instance, made $282,000. My now-former family doctor and his new partner plan to serve only 800 patients at a time. As the minimum annual membership in the practice costs $3,500, this means that they begin each year with $2.8 million in income from their patients, in addition to the insurance reimbursements they will receive for services rendered. Clearly, money is an important motivation.

(For the record, it should be noted that the federal government limits the number of doctors that can be trained, and the medical profession has not founded new medical faculties that could produce more doctors. As the number of doctors has actually declined 17% since 1983, according to the federal government, this would seem to be an odd set of decisions given the rising demand for medical care.)

Second, if we consider the doctor reducing his patient load, money is not likely his concern. He sold his practice some time ago to a large research hospital that covers all office expenses and pays him a very good salary, much more, I would guess, than the national average. The load-reducing doctor easily earns his keep by providing the hospital with millions of dollars in “billables” through referrals for treatment.

The load-reducing doctor doubtless wants to return some sanity to his professional life, and the same could probably be said for my doctor starting up the boutique practice. My doctor going boutique is doing it with money: by increasing his yield per patient, he is in effect lowering the demand for his services. He is also no longer subject to the possible productivity demands of hospital overlords who seek in turn to boost hospital profits. He is regaining some professional autonomy, though at a very high price for his patients.

The rub, it seems to me, lies with the load-reducing doctor who has remained with the hospital. It likely makes good sense to slim down his patient list, unless his hospital is paying a per capita bonus, or it pays him for exceeding a goal for the number of patients he sees per year. He probably will not be paid less, because of his market value and reputation. His practice has been closed for years, and his remaining patients are likely to cling to him for his competence and for the security of having a family doctor. He will not want for patients, though the hospital might miss some referrals.

In all, though, I bet he will barely feel the difference. It might work out a bit better for his patients, whose access to him might marginally increase. But his workload and work routine probably won’t change much, as the remaining patients will take advantage of the opportunity to get more care.

This is because our health is so valuable to us that we will seek as much care as possible, because we can never tell what is enough. The more opportunities that are offered, the more care we seek. As our demand for care increases, more care is offered.

Here are a few examples of how our consumption of medical care is growing. In 2004, according to the Centers for Disease Control, Americans made 1 billion doctor visits, and the rate of increases in doctor visits is running about three times the rate of our population growth. We made 35 million visits to the hospital in 2004, and the number of stays is growing by about 3% a year.

Services such as diagnostic radiology and commodities such as prescription drugs are growing much faster, both in quantity and cost. Radiology billings are increasing at the rate of 20% a year, hitting $100 billion in 2006. Americans now consume an average of 11 drug prescriptions yearly, and their costs are rising faster than the rate of inflation.

We spent $2 trillion on health care in 2005, a figure that amounts to 16% of our gross domestic product. By 2015, we will be spending $4 trillion a year, thus devoting 20% of our gross domestic product to health care.

We are definitely consuming more health care, even though we cannot determine how much health care is enough. In addition, as the population ages and our collective health faces more challenges, we are likely to seek more care. And as one national scheme or another covers millions of uninsured Americans, they can finally and fully meet their health needs, and will consume more care. The good news for the currently uninsured is that, if Medicare is any guide, their health status will improve measurably, just as the health status of the elderly did.

Thus, there is surely unmet need, even in the face of galloping demand. This anomaly dissolves once one recognizes that there are many in America who don’t get enough, and many who can’t get enough. For people who just can’t get enough, their desire for more is transforming health care into a “quality experience,” from boutique medical practices to boutique wings of hospitals.

In the past, rank and wealth surely had their privileges in towns and cities where the rich could support society doctors and special hospital accommodations. The new, more numerous class of well-paid managers and professionals that has grown up since World War II has recreated American health care as a growth industry which they run and from which they profit. They have pushed the medical profession to provide care as Henry Ford pushed his workers to make Fords. Doctors, once accorded elite status by virtue of their profession, now pursue entrepreneurial projects via boutiques, incorporation, and the refusal of insurance. Many are converting medical practice into a business model.

Even as some doctors rebel, escape, or go out on their own in some out-of-the-way place, others, like the load-reducer looking for sanity for himself and better service for his patients, are becoming cogs in the wheels of large, vertically integrated firms. They refer clients to a capital-intensive medical machine run by managers and doctors with profit-based business plans. Every hospital caught up in the race believes that it will soak up the growing demand by providing an ever-growing supply of machines, beds, day surgeries and, importantly, innovative cures for the very sick.

As the supply grows, so in turn does demand once more, fed by our unquenchable desire for more health and more well being. We return once again to the question: What is enough? The answer is presently unknowable for three reasons.

First, enough is defined now in terms of differential resources. If you have money, or even proxy money such as insurance, private or federal, you can answer the question much more robustly. The lack of money or insurance for others defines in effect what is enough for them. The fact that persons making less than $20,000 a year spend 15% of their income on medical care while those making more than $70,000 spend just 3% suggests how much more low-income individuals are affected by medical costs than those with high incomes. I suspect, though I cannot prove it here, that high-income individuals not only are better protected by insurance, but that they have more disposable income to devote to more health care consumption. This last might explain why hospitals are installing those hotel-like hospital wings complete with chefs and concierges, and thus making health care into a “quality experience.”
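A quick back-of-the-envelope computation makes the asymmetry concrete. The incomes below are just the bracket edges from the figures above, my own simplification rather than numbers from any dataset.

```python
# Illustrative only: apply the essay's percentages to the bracket-edge
# incomes and compare the absolute dollars spent on medical care.
low_income, high_income = 20_000, 70_000
low_share, high_share = 0.15, 0.03

print(f"lower bracket:  {low_share:.0%} of ${low_income:,} = ${low_income * low_share:,.0f}")
print(f"higher bracket: {high_share:.0%} of ${high_income:,} = ${high_income * high_share:,.0f}")
```

Even at the bracket edges, the lower earner is out more absolute dollars ($3,000 versus $2,100) from less than a third of the income, which is the sense in which medical costs weigh so much more heavily on the poor.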

Second, the private system of medical care today is driven by a profit motive under which expanding our notion of what is enough itself creates greater demand for the industry’s products. For the industry, more is better, particularly as professional medical knowledge and ethics are being subjected to a business model. Its actors can answer for their own interests, but their opinions are of necessity partial, and would probably boil down to the argument that we can never have enough, as health is more generally treated as an immeasurable human good.

Third, self-preservation, however keenly desired, does not necessarily encourage rational choices. To the question of what is enough, for many, the answer is simply the egoistic reply of what is best for them.

Our two practitioners, like us, are struggling to answer the same question in their ethically loosened and bureaucratized world. Their choices – one to ration care by making it expensive, and the other to ration care by eliminating patients – are the unfortunate products of a system incapable of rationalizing itself.

When we think of national health insurance, I believe that we think largely of satisfying the unmet demands of a near majority of Americans for quality medical care. However, we often fail to realize that any national health system will come to pass as a descendant of the one we have now, one in which the question of what is enough is answered by an anxious, insatiable demand for care in an environment of relative indifference to the needs of others. As so often happens in America, it is hard to note the suffering of others while resources keep expanding for the things that other individuals value.

A new national system must not only provide medical care equitably for the first time in American history, but it also must develop a collective answer to the question of what is enough. It involves answering the practical question of how much of our national income we want to devote to medical care as against other goods and services. It also involves re-setting the moral terrain through collective agreement based upon an ongoing investigation of what care is necessary for a decent life.

Random Walks: Tunnel Vision

James Cameron’s 1997 blockbuster movie Titanic broke box office records and garnered bushels of awards; it remains one of the top-grossing films of all time. A large part of its appeal lay in the central (fictional) story of the doomed young lovers, young socialite Rose DeWitt Bukater (Kate Winslet) and impoverished American artist Jack Dawson (Leonardo DiCaprio), who ultimately sacrifices his own life to save hers. A good tragic love story is a time-tested recipe for cinematic success.

But even people whose heartstrings remained untugged by the tearjerker tale couldn’t help but be entranced at the spectacle of the great Titanic brought to life on the big screen, and the lavish re-enactment of its tragic sinking makes for stellar cinema. There’s just something about the Titanic that resonates with us on a deep, subconscious level, and it is that element that ultimately raises Cameron’s film above mere Hollywood bathos.

It’s tough to put one’s finger on precisely what that “something” is, but sci-fi author Connie Willis managed to do just that in her 2001 novel Passage, which I recently re-read for the umpteenth time while recovering from a virus. I’ve been a Willis fan for years, ever since I read her short stories “At the Rialto” and “Even the Queen” (both included in the collection Impossible Things). She has the skeptical mind of a scientist — her husband is a physicist, which probably helps — and the soul of a poet, mixing science fact, science fiction, literary allusion, and metaphor with memorable characters and terrific storytelling. Her broad popular appeal is evidenced by her nine Hugo Awards and six Nebula Awards (thus far), making her “one of the most honored sci-fi writers of the 1980s and 1990s.”

Willis has tackled time travel, chaos theory, and the sociology of fads (Doomsday Book, To Say Nothing of the Dog, Bellwether), but in Passage she immerses herself in what is for many the Ultimate Question: is there life after death, a part of our consciousness — a soul, if you will — that continues even after the body dies? To explore her theme, she delineates the science of Near-Death Experiences (NDEs): the proverbial light at the end of the tunnel, at least if the estimated 7 million people who claim to have experienced an NDE are to be believed. Written accounts of NDEs date back some two thousand years, and hail from all over the world. There’s even a research foundation devoted to the study of NDEs.

The man who coined the term “near-death experience” is Raymond Moody, an MD who has written several books on the afterlife based on patient testimonials. Moody believes NDEs are evidence of a soul (a consciousness that exists separately from the brain) and of an afterlife, and he has boiled the typical NDE down to a few key features. First, there’s a strange kind of noise, alternately described as a ringing or a buzzing. There is a sense of blissful peace, and often an out-of-body experience (the feeling of floating above one’s body and observing it from that vantage point). There’s that light at the end of the tunnel, being met by loved ones, angels, or other religious figures, and a kind of “life review” — seeing one’s life flash before one’s eyes. But as the Skeptic’s Dictionary helpfully points out, Moody’s books ignore the fact that as many as 15% of NDEs are outright hellish experiences.

It’s not all mystical New Age spiritualism, or even overt religiosity. There have been some solid scientific studies of what happens to the brain during such events, most notably a 2001 Dutch study published in the prestigious British medical journal The Lancet. The researchers examined 344 patients who had been resuscitated after cardiac arrest and interviewed them within a week about what — if anything — they remembered. The results were a bit startling: about 18% could recall some portion of what happened while they were clinically dead, and between 8 and 12 percent reported some form of NDE.

Neurochemistry offers some convincing alternative explanations — namely, that NDEs aren’t evidence of an afterlife but illusions created by a dying (oxygen-deprived) brain. Cardiac arrest and the anesthetics used in ERs are capable of triggering NDE-like brain states. The Dutch researchers found that “similar experiences can be induced through electrical stimulation of the temporal lobe,” for instance, as can neurochemicals such as endorphins and serotonin, and hallucinogenic drugs like LSD and mescaline. Last October, an article in New Scientist described a new theory by University of Kentucky neurophysiologist Kevin Nelson attributing NDEs to a kind of “REM intrusion”: “Elements of NDE bear uncanny similarity to the REM state,” he told the magazine. He describes REM intrusion as “a glitch in the brain’s circuitry that, in times of extreme stress, may flip it into a mixed state of awareness where it is both in REM sleep and partially awake at the same time.” Something similar might be happening with NDEs, he reasons, although the jury is still out on that particular hypothesis.

Karl Jansen has managed to induce NDEs with ketamine, a hallucinogen related to PCP but far less destructive; it’s an anesthetic that works not just by dulling pain but by creating a dissociative state. According to Jansen, the conditions that give rise to NDEs — low oxygen, low blood flow, low blood sugar, and so forth — can kill brain cells, and the brain often responds by triggering a protective flood of chemicals very similar to ketamine, which would produce “out of body” sensations and possibly even hallucinations. Jansen claims his approach can reproduce all the main elements Moody attributes to NDEs: the dark tunnel with a light at the end, out-of-body experiences, strange noises, communing with God, and so on.

Why do so many people see a light at the end of a tunnel? Susan Blackmore, a psychology professor at the University of the West of England in Bristol, thinks she might have an explanation: neural noise. During cardiac arrest, in the throes of death, the brain is deprived of oxygen, causing cells in the visual cortex to fire rapidly and randomly. Because far more of those cells are devoted to the center of the visual field than to its edges, the random firing produces white light in the center fading into dark at the periphery. The feeling of peace and well-being might be due to endorphins the brain pumps out in response to pain, which can produce a dream-like state of euphoria. That same cerebral anoxia might also cause the strange buzzing or ringing sound people claim to hear when they enter an NDE.
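For the mechanically minded, the geometry of Blackmore’s idea is easy to see in a toy simulation. The sketch below is my own illustration, not her actual model, and every number in it is invented: it simply assumes that the density of visual-cortex cells representing a point falls off with distance from the center of gaze, lets every cell fire at random, and compares the resulting “brightness” at the center and at the edge.

```python
import numpy as np

size = 101                                   # pixels per side of the toy visual field
y, x = np.mgrid[:size, :size]
r = np.hypot(x - size // 2, y - size // 2)   # distance from the center of gaze

# Assumption (invented for illustration): the number of cortical cells
# representing each point falls off with distance from the center.
density = 1.0 / (1.0 + 0.2 * r)

# Anoxia makes cells fire at random; perceived brightness at each point is
# the accumulated random activity, weighted by how many cells live there.
rng = np.random.default_rng(0)
steps, p_fire = 200, 0.5
noise = sum((rng.random((size, size)) < p_fire) for _ in range(steps))
brightness = noise * density

center = brightness[size // 2, size // 2]
edge = brightness[size // 2, 0]
print(f"center is {center / edge:.1f}x brighter than the edge")
```

With these made-up parameters, the center comes out roughly ten times brighter than the periphery: a white disc fading into darkness, which a distressed brain could plausibly interpret as a tunnel with a light at the end.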

Willis takes that tiny bit of factual thread and spins it into a complex scientific mystery, skewering cheap spiritualism in the bargain. She got the idea for the book when a friend pressed a book about NDEs on her, insisting she would find it inspiring. Instead, Willis loathed it. In fact, it made her angry: “I thought it was not only pseudoscience, but absolutely wicked in the way it preyed on people’s hopes and fears of death, telling them comforting fictions.” She channeled that anger into the novel, creating the character of Dr. Maurice Mandrake, a physician who has abandoned all pretense of scientific objectivity and prompts his “witnesses” to say what he expects to hear — a strategy that has made him a best-selling author of just the sort of book that triggered Willis’ creative rage. (The savvy reader will note some striking likenesses to Moody.)

Mandrake’s foil in the novel is the main protagonist, Joanna Lander, also a doctor studying NDEs, but her approach is far more rational, firmly grounded in the scientific method — which puts her at odds with Mandrake and his minions. She finds an ally (and potential love interest) in neuroscientist Richard Wright, who has contrived a way to induce NDEs using a psychoactive drug called dithetamine. (One could say he’s the fictional counterpart to Jansen.) His theory is that the NDE is a survival mechanism, part of a series of strategies the brain employs whenever the body is seriously injured; the NDE itself is a side effect of neurochemical events.

While NDEs seem to have some striking similarities across the various recorded accounts, the specific form an NDE takes depends on the individual, and for Willis this provides an opportunity for a clever twist. Recall the now-famous final scene in Cameron’s Titanic, when the elderly Rose, having lived a long, full life, dies quietly in her sleep. The camera follows her “soul’s” journey out of her body, toward a white light, then down into the ocean depths until she reaches the present-day wreck. As the camera moves into the main dining hall, we see the shipwreck morph back into its former unsunken glory, and all those who perished are on hand to welcome Rose back to the fold. Waiting at the top of the grand staircase is Jack himself, who takes the hand of a Rose restored to her youth, and the lovers are reunited in eternity. (Cue violins and a thousand sobbing fans, followed by that overblown — and overplayed — Celine Dion ballad.)

But what if Rose hadn’t willingly gone into the light? Then she might have popped back to her reality as an elderly woman in the middle of the ocean, on a complimentary junket to revisit the sunken ship because she alone might know the secret of the fabulous lost diamond necklace. And her “vision” would have qualified as a classic NDE. Joanna would definitely have wanted to interview her. Not only would Rose have made an excellent witness, but the two share a common NDE framework: when Joanna consents to let Richard induce NDEs in her as a subject, she also finds herself on the Titanic — on the very day of its sinking.

Of course, it isn’t really the Titanic: she knows that, even though the experience feels uncannily real, nothing like a typical dream state. But Joanna is convinced there’s a reason her subconscious has picked this particular framework in which to place her NDE. The Titanic is the perfect metaphor for what happens as the brain strives to make sense of things even as it is dying (or pseudo-dying, in the case of induced NDEs). The body is a sinking ship; the chemical signals and electrical impulses are SOS messages seeking some form of rescue, some way of jump-starting the body before brain death sets in, four to six minutes after the onset of oxygen deprivation. The metaphor of the Titanic, says Joanna’s former English teacher, is “the very mirror image of death.”

Willis knows a little something about death, having lost her mother quite suddenly when she was 12, an experience she has described as “a knife that cut across my life and chopped it in two. Everything changed.” But she didn’t take refuge in the popular Hallmark sentiments that often pass for profundity in this country. “[O]ur American culture is especially in denial about death,” she said in a 2003 interview, and while writing Passage, “I wanted to make sure some reader who had just had somebody die… would say, ‘Thank you for telling the truth and trying to help me understand this whole process.’”

And what a brutal truth it is. The novel ends by taking us right into the dying brain of one of the characters, quite conscious and very aware of what the visions represent: synapses firing randomly, memories falling away, even language being lost, before the visual cortex shuts down completely. The depiction is unflinching, and all the more powerful because it never resorts to easy platitudes. It’s not a comfortable book to read; it’s decidedly unsettling, yet I return to it again and again. Maybe it’s because Willis stops short of letting everything go dark: in the end, she acknowledges that some things cannot — and need not — be known, and reassures us that it’s okay for them to remain unknown. Our only job, as human beings, is to make the life journey as rich and meaningful as possible, in whatever way we choose to do so.

It’s safe to say that the vast majority of the world’s population professes belief in some form of life after death, even if the form it takes differs from culture to culture; belief is far more comforting than the alternative. No doubt many believers resist any attempt by scientists to explain their experiences in rational terms. Like Maurice Mandrake and his fawning acolytes in Willis’ novel, they want their mystical and spiritual trappings, and consider scientific explanations a kind of cheapening of such beliefs and experiences.

Even before I became a science writer, I never understood that attitude. How much more magical and wondrous to discover that there is a rational reason why we experience such things. “[P]eople are trying to validate their experience by making these paranormal claims, but you don’t need to do that,” Blackmore told ABC News when the Lancet study was published. “They’re valid experiences in themselves, only they’re happening in the brain and not in the world out there.”

When not taking random walks at 3 Quarks Daily, Jennifer Ouellette blogs about science and culture at Cocktail Party Physics. Her most recent book is The Physics of the Buffyverse (Penguin, 2007).

Blair’s Legacy and Brown’s Prospects

by Matthias Matthijs

The curtain is finally falling on Tony Blair’s tenure in 10 Downing Street. After ten years as prime minister, Blair will step down on June 27. He will be succeeded by his stoic chancellor, Gordon Brown, although many Labour Party faithful are far from enthusiastic about the pending “coronation.” After almost ten years of Blair in Number 10, there is a widespread sense of disappointment – across the political spectrum – with the man who promised to create a “new Britain” and pledged to forever change the face of British society. On the left, there is frustration with New Labour’s slow progress on poverty reduction and with unacceptable levels of income inequality; on the right, there is criticism of excessive micromanagement and endless tinkering with the tax system, which supposedly harms the country’s international competitiveness. And an overwhelming majority of the British population disapproved of his foreign policy, seen as too close to the Bush administration and as often actively harming fundamental British interests.

What started out as a potentially brilliant premiership in 1997 was undermined by excessive caution in domestic politics during his first term and by his controversial decision to join US forces in what most Britons saw as an illegal war in Iraq during his second. Where Thatcher was able to fuel nationalist fervor and pride in 1982, masking her initial failures in economic policy with a resounding military victory over Argentina in the Falklands, Blair’s disastrous adventure in Iraq would soon overshadow his successes in the domestic economy. History will judge Blair as a rather weak consolidator of Thatcherism, in both domestic and foreign policy. During his first term, most of Thatcher’s economic reforms were made all but irreversible by being formally institutionalized (Bank of England independence banished the goal of full employment to the dustbin of history, and Brown’s “golden rule” made a virtue of fiscal austerity). Market mechanisms have been infused into all parts of government, something even Thatcher would not have dreamt of. In foreign affairs, Blair’s instincts on relations with Europe and America have been the same as Thatcher’s. However, Thatcher was much more independent of Washington than Blair, more ready to defend the plight of the Palestinians and to criticize Israeli policy when she felt it justified. And whatever has been said about the Falklands, that war could at least be defended in terms of the British national interest, which is not the case for Blair’s participation in the Iraq war.

Given the vagueness of New Labour’s ideas and Blair’s internal party reforms while in opposition, one should probably not be surprised that Blair and Brown acted the way they did once in power. It was never their intention to attack the emerging Thatcherite consensus in economic policy – even though they could have made the case, and certainly had the overwhelming majority to do so – since they saw most of it as inevitable in an increasingly globalized world. From the very beginning, Blair’s goals in government were much more modest than Thatcher’s were in 1979, consisting of merely tweaking what he saw as a fundamentally sound economic system inherited from Major and Clarke. New Labour’s large majority in the House of Commons, the widespread disillusion with the Tories after eighteen years in power, and the ceaseless rhetoric and promise of “a New Settlement for a New Britain” together created inflated hopes and unrealistic expectations – especially on the left – for the incoming government.

After ten years in power, it is clear that Blair is not in the same league as Attlee or Thatcher, and will never be seen as a politician who “changed the weather.” Maybe he was unlucky in this respect. Both Attlee and Thatcher came to power supported by a changing tide of the dominant ideology. It should have been clear from the start that The Third Way – or at least the interpretation given to it by New Labour – was ideationally too close to Thatcherism to create widespread enthusiasm and significant change. In the end, Mrs. Thatcher herself is perhaps the best judge. During her 81st birthday party in London last October, a guest asked her what she thought of new Tory leader David Cameron. With a smile, she answered: “I still prefer Tony Blair.”

But what are the prospects for Gordon Brown? Maybe he is secretly hoping to repeat John Major’s stunt in the general election of 1992. If so, he is wrong, for two reasons. Firstly, Major – for all his shortcomings – was a relatively new face whom Britain seemed to trust, having been in high politics for only about a year and a half when he got the top job; Brown is as much associated with New Labour as Tony Blair, if not more. Secondly, opinion polls in October 1990 showed 34.3% of voters intending to vote Conservative at the next general election, against 46.4% for Labour. Just two months later, in December 1990, after the defenestration of Margaret Thatcher, 44.6% of the electorate told Gallup they would vote Tory, compared to 39.1% for Labour. It is unlikely that Brown can bring about the same dramatic change in public perceptions: one recent poll by The Guardian gives Cameron’s Conservatives a ten-point lead if Brown were to face an election as prime minister. The similarities with Al Gore in 2000 grow more striking by the day.

The author is a Professorial Lecturer in Economics at Johns Hopkins University in Washington, DC.