The Top 40 Picks of the Tribeca Film Festival

From The Village Voice:

Conceived in the shadow of no towers, the Tribeca Film Festival was the first 9-11 memorial, and surely the most upbeat. The fifth edition acknowledges its roots—opening with the movie everyone I know is afraid to see, the quasi-real-time United 93. At least two documentaries evoke that epoch-defining day, and there are many more on the Bush wars, not to mention the fictional disaster movie Poseidon and the presumably mega-violent secret-agent flick Mission: Impossible III.

What have Robert De Niro and his producer Jane Rosenthal wrought? From the perspective of its founders, Tribeca has been a mild boon to neighborhood restaurants and a magnificent advertisement for American Express. The festival is a triumph of branding, but has it found its niche? Like the city it celebrates, Tribeca has proven resilient, but like New York, it’s far too sprawling and abrasive to ever attain the grooviness of SXSW or the exclusivity of Telluride. Marketing—yes. Market—we’ll see. Tribeca is very far from rivaling Sundance (or Toronto) as the place at which to sell or launch a movie. True, Oscar nominee Transamerica did have its premiere at the last festival—but only God and Harvey Weinstein know if the Weinstein brothers weren’t already planning to make that acquisition. (Other recent releases that found distributors at Tribeca include 4 and Ushpizin; The Power of Nightmares, The Beat That My Heart Skipped, and Night Watch were local premieres.)

More here.



Finches Provide Answer to Another Evolutionary Riddle

From Scientific American:

Spring is the season for flashy mates, at least for finches. It is only later in the year that the females choose based on genetic diversity, according to new research from two scientists at the University of Arizona. Their 10-year study of a colony of 12,000 finches in Montana has revealed the seasonal dynamics of finch attraction and thereby resolved an evolutionary conundrum. Previous research had shown that female birds go for the most resplendent mates; in the case of finches, this means the males with the reddest breast.

By analyzing genetic records collected over 10 years, researchers found that early in the mating season, females chose the male finch with the reddest breast. But as the season wore on–and new females entered the charm–they typically chose males with strong genetic differences from themselves. And those tempted to stray typically chose a mate more genetically different than their regular partner, according to the research presented in Proceedings of the Royal Society B.

More here.

Tuesday, April 18, 2006

‘When it comes to facts, and explanations of facts, science is the only game in town’

Daniel Dennett is the “good cop” among religion’s critics (Richard Dawkins is the bad cop), but he still makes people angry. Sholto Byrnes met him.

“That’s one of my favourite phrases in the book,” says Daniel Dennett, his huge bearded frame snapping out of postprandial languor at the thought of it: “If you have to hoodwink your children to ensure that they confirm their faith when they are adults, your faith ought to go extinct.” The 64-year-old Tufts University professor is amiable of aspect, but the reception he has had while in Britain promoting his new book, Breaking the Spell: religion as a natural phenomenon, has not been uniformly friendly. His development of the theory that religion has developed as an evolutionary “meme”, a cultural replicator which may or may not have a benign effect on those who transmit it, has drawn attacks, not least in these pages, where John Gray accused him of “a relentless, simple-minded cleverness that precludes anything like profundity”.

more from the New Statesman here.

paracelsus


Nerd. Geek. Poindexter. The classmate with the taped-together glasses, pocket protector and bad haircut; the subway passenger with the abstracted gaze and “The Very Best of the Feynman Lectures” playing on her iPod; the professor with chalk dust on his coat, mismatched socks and a Nobel in his future. The image of the kooky, bedraggled scientist — wide-eyed Einstein with his mad corona of white hair, sticking out his tongue — is so ingrained in the collective imagination that it’s come to resemble a veritable cartoon.

In Philip Ball’s deeply weird and wonderful new book, “The Devil’s Doctor,” the man who might well be the prototype for that familiar mad-scientist figure — the 16th century alchemist and epic wanderer Paracelsus — neatly escapes the caricaturist’s frame and emerges exuberantly and combatively alive. Hardly a hagiography, the book (subtitled, enticingly, “Paracelsus and the World of Renaissance Magic and Science”) rescues from obscurity a man who, Ball argues, was a flesh-and-blood hinge between the medieval and the modern universe.

more from Salon.com Books here.

new john ashbery poem

On Seeing an Old Copy of Vogue on a Chair

For all I know I was meant to be one of those marchers
into a microtonal near-future whose pile has worn away—
the others, whose drab histrionics provoke unease to this day,
so fair, so calm, a gift from cartoon characters I loved.
Alas, the happy ending and the tragic are alike doomed;
better to enter where the door is held open for you
with scarcely a soupçon of complaint, like salt in stew
or polite booing at a concert he took you to.

No longer shall the grasses weave quilts for our revenge
of lying down on, or a faint breeze stir milady’s bangs.
What is attested is attested to. To flirt with other thangs,
peacockish, would scare the road away.

Frogs give notice when the swamp backs up, and butterflies
aren’t obliged to stay longer than they do.
Look, they’re already gone!
And somewhere, somebody’s breakfast is on exhibit.

from the Paris Review.

In Heart Disease, the Focus Shifts to Women

From The New York Times:Heart1

Women with chest pain and other heart symptoms are more likely than men to have clear coronary arteries when tests are performed, a surprising result that suggests there may be another cause for their problems.

When women do have blocked coronary arteries, they tend to be older than men with similar blockages and to have worse symptoms, including more chest pain and disability. These women are also more likely to have other problems like high blood pressure, high cholesterol and diabetes, which may make surgery riskier. And they are more likely than men to develop heart failure, a weakening of the heart muscle that can be debilitating and ultimately fatal.

When women have bypass surgery or balloon procedures for coronary blockages, they are less likely than men to have successful outcomes, and they are more likely to suffer from bad side effects.

Blood tests that reliably pick up signs of heart damage in men do not always work in women.

Women seem much more likely than men to develop a rare, temporary type of heart failure in response to severe emotional stress.

“We don’t have good explanations for these gender differences,” said Dr. Alice K. Jacobs, a cardiologist at Boston University.

More here.

New pathogenic bacterium pinpointed

From Nature:

Scientists have discovered a previously unknown bacterium lurking in human lymph nodes, a finding that suggests there are many more disease-causing bacteria still to be discovered. The bacterium is thought to cause chronic infections in patients with a rare immune disorder called chronic granulomatous disease (CGD), and the research team is now investigating whether it might be involved in conditions that are more common, such as irritable bowel syndrome. Researchers know only a fraction of the bacteria that inhabit the water, air and our bodies, because most of them are impossible to grow and identify in the lab. Even when bacteria are suspected as the cause of a disease, it can be extremely difficult to pin down the exact culprit. The digestive disorder Crohn’s disease, for example, may be partly caused by bacteria. But researchers have been unable to isolate the bugs that are to blame.

More here.

Monday, April 17, 2006

Lunar Refractions: “Our Biggest Competitor is Silence”

I really wish I had the name of the Muzak marketer who provided this quote as it appeared in the 10 April issue of the New Yorker magazine. Silence is one of my dearest, rarest companions, and this marketer unexpectedly emphasized its power by crediting it as the corporation’s chief competitor—no small role for such a subtle thing.

My initial, instinctual, and naturally negative reply was that, though this claim might be comforting to some, it’s also dead wrong. In most places, silence lost the battle long ago. A common strain that now unites what were once very disparate places and cultures seems to be the increasing endangerment—and in some cases extinction—of silence. I think about this a lot, especially living in a place where for much of the day loud trucks idle at length below my apartment, providing an aggravating background hum that I’ve never quite managed to relegate to the background. I lost fifteen minutes the other day fuming about the cacophonous chorus of car alarm, cement truck, and blaring car radio that overpowered whatever muffling defense my thin windows could lamely offer, not to mention the work I was trying to concentrate on. I’d buy earplugs, but noise of this caliber is also a physical, pounding presence. I admit that this sensitivity is my own to deal with, but something makes me doubt I’m alone in New York; in certain neighborhoods, and often outside hospitals, signs are posted along the street: “Unnecessary Noise Prohibited.” I wonder who defines the term unnecessary, and how. Other signs warn drivers that honking the car horn in certain areas can be punished with hefty fines. A couple of years ago the same magazine cited above ran a piece—I believe it was in the Talk of the Town section—covering a local activist working to ban loud car alarms. Since silent alarms are now readily available, and have proven more effective, there really is no need for these shrill alarms. My absolute favorite ones are those set off by the noise of a passing truck, just as one apartment-dweller might crank up the volume on the stereo to drown out a neighbor’s noise. Aural inflation runs rampant.

But the comment of the Muzak marketer wasn’t enough to get me to set fingers to keyboard; what finally did it was a day-hike I took in the hills of the upper Hudson valley on Easter Sunday. I almost thought twice about escaping the city on this holiday, since—no matter how agnostic, multicultural, or 24/7 this city might be—such days always bring a rare calm. For just a few precious hours we’re spared the sound of garbage trucks carrying our trash away from us while replacing it with a different sort of pollution, and spared many other noisy byproducts of our so-called progress. As I was walking through the woods, a wind kicked up, rustling the leaves packed down by winter snow, and I was reminded of just how loud the sound of wind through bare tree branches overhead can be. Most people would probably say that wind in trees is quieter, and less disturbing, than more urban sounds, but I was reminded yesterday that that isn’t always the case.

So I set out to briefly investigate silence—why some people can’t seem to find any, why so many do everything in their power to rid themselves of it, and why many just don’t seem to give it any thought, unobtrusive as it is. It has played a major role in many religions, from the tower of silence of Persian Zoroastrianism to the Trappist monks’ vows of silence; one could speculate, in a cursory way, that the rise of secular culture was accompanied by a rise in volume. I came across a curious coincidence while checking out the etchings of Manet recently that would support such a conclusion. While the painter of Olympia has often been called the least religious of painters, an etching of his done around 1860 (in the print collection of the New York Public Library) portrays a monk, tablet or book in hand and finger held to lips, with the word Silentium scrawled below. Given the connotative relationship between silence and omission, oblivion, and death, Manet’s etching has interesting implications for both silence and religion as they were seen in nineteenth-century Paris. If not secularization, perhaps industrialization ratcheted everything up a few decibels.

Silence—of both good and bad sorts—runs through everything, leaving traces throughout many languages. There are silent films, which exist only thanks to a former lack of technology, and were usually accompanied by live music. Some people’s ideal mate is a classic man of the strong, silent type—adjectives never jointly applied to a woman. A silentiary is (well, was, since I doubt many people go into such a line of work nowadays) a confidant, counselor, or official who maintains silence and order. Cones of silence appear in politics, radar technology, nineteen-fifties and sixties television shows, and science fiction novels. After twenty years of creating marvelous music out of what could be derogatively deemed noise, the band Einstürzende Neubauten came out with both a song and album titled “Silence is Sexy.” Early on the band’s drummer, Andrew Chudy, adopted the name N. U. Unruh—a wild play on words that can be connected to a German expressionist poet and playwright, a piece of timekeeping equipment, and, aptly, a riff on the theme of disquiet or unrest.

Getting back to my stroll in the woods, when considering the peace and quiet of a holiday I inevitably turn to poet Giacomo Leopardi’s songs in verse. His thirteenth canto (“La sera del dì di festa,” “The Evening of the Holiday”) laments the sad, weighty quietness left after a highly anticipated holiday. The falling into silence of a street song at the end is a death knell for the past festivities. In keeping with this, his twenty-fifth canto (“Il sabato del villaggio,” “Saturday Night in the Village”) praises Saturday’s energetic sounds of labor in preparation for the Sunday holiday, saving only melancholy words for the day of rest itself and its accompanying quiet. I don’t wish to summarize his rich and very specific work, so I encourage you to have a look at it for yourself. That these poems were written across an ocean and over a century ago attests to the fact that silence is not golden for everyone. Were he to live today, Leopardi might well be one of the iPod-equipped masses.

When I found that Leopardi’s opinion differed from my own, I looked to another trustworthy poet for a little support in favor of my exasperation. Rainer Maria Rilke, in his famous fifth letter to the less famous young poet, written in the autumn of 1903, is evidently dependent on silence:

“… I don’t like to write letters while I am traveling, because for letter writing I need more than the most necessary tools: some silence and solitude and a not too familiar hour…. I am still living in the city… but in a few weeks I will move into a quiet, simple room, an old summerhouse, which lies lost deep in a large park, hidden from the city, from its noises and incidents. There I will live all winter and enjoy the great silence, from which I expect the gift of happy, work-filled hours….”

To break the tie set by Leopardi and Rilke, I turned to another old friend for comfort, and was surprised to find none. Seneca, in his fifty-sixth letter to Lucilius, asserts that it is the placation of one’s passions, not external silence, that gives true quiet:

“May I die if silence is as necessary as it would seem for concentration and study. Look, I am surrounded on every side by a beastly ruckus…. ‘You’re a man of steel, or you’re deaf,’ you will tell me, ‘if you don’t go crazy among so many different, dissonant noises…’. Everything outside of me might just as well be in an uproar, as long as there is no tumult within, and as long as desire and fear, greed and luxury don’t fight amongst themselves. The idea that the entire neighborhood be silent is useless if passions quake within us.”

In this letter he lists the noises that accompany him on a daily basis: the din of passing horse-drawn carriages, port sounds, industrial sounds (albeit those of the first century), neighborhood ball players, singing barbers, numerous shouting street vendors, and even people “who like to hear their own voices as they bathe.” It sounds as though he’s writing from the average non-luxury apartment of today’s cities. His point that what’s important is interior calm, not exterior quiet, exposed my foolishness.

À propos of Seneca and serenity, a friend of mine recently bought an iPod. A year ago we had a wonderful conversation where she offered up her usual, very insightful criticisms of North American culture: “What is wrong with this country? Everyone has a f****** iPod, but so few people have health insurance! Why doesn’t anyone rebel, or even seem to care?” As I walked up to meet her a couple of weeks ago I spotted from afar the trademark white wires running to each ear. “I love this thing. I mean, sure, I don’t think at all anymore, but it’s great!” To say that this brilliant woman doesn’t think anymore is crossing the line, but it’s the perfect hyperbole that nears the truth; if you can fill your ears with constant diversion, emptying the brain is indeed easier. The question, then, is what companies like Muzak and their clients can then proceed to fill our minds with if we’re subject to their sounds.

This relates to the ancient sense of otium as well—Seneca’s idea that creativity and thought need space, room, or an empty place and time in which to truly develop. Simply defining it as leisure time or idleness neglects its constructive nature. The idea that, when left at rest, the mind finds or creates inspiration for itself, and from that develops critical thought, is key to why I take issue with all this constructed, mass-marketed sound and “audio architecture.” While it might seem that an atmosphere filled with different stimuli and sounds would spark greater movement, both mental and physical, I think we’ve reached the point where that seeming activity is just that—an appearance, and one that sometimes hides a great void.

In closing, for those interested, we may finally be able to give credit to the Muzak marketer who inspired me. On Tuesday, 18 April, John Schaefer will discuss Muzak on WNYC’s Soundcheck. In the meantime, I’ll leave you with a gem from the September 1969 issue of Poppin magazine. In music critic Mike Quigley’s interview with Alice Cooper, the latter discussed what he’s looking for between himself and the audience: “If it’s total freedom, I guess the ultimate thing you can go into is total silence between the audience and performer, with the performer projecting something he doesn’t even have to play. A total silence trip is the ultimate.” Even Muzak can’t counter that.

Dispatches: Flaubert and the Anxiety of Inheritance

In yesterday’s New York Times Book Review, James Wood reviews the new Flaubert biography.  It’s a natural call, because Wood sees Flaubert as a hinge figure for the development of ‘self-consciousness’ in literature (more on this below), and because of Wood’s official (i.e. disputed) status as the last true literary critic.  Flaubert’s reputation matches up here quite well: the supreme stylist; the dogged aesthete; the urbane man of letters; the tireless reader and writer; the champion of aesthetic autonomy; the first diagnostician of our modern dilemma – Flaubert was born to die, to make way for his own legend.  That said, to make an invidious historical comparison, Wood’s style is far more self-consciously literary and concerned to brandish tropes than Flaubert’s ever was: ‘dipped in futility,’ ‘the great pool of death,’ ‘a long siege on his talent.’  Where the air of death surrounds Flaubert at this juncture in the history of reading, Wood’s analyses of literary style in the pages of The New Republic, The New Yorker, the London Review of Books, etc., give off the less powerful aroma of anachronism.  As n+1 so cattily remarked, Wood seems to want to be his own grandfather.

In a larger way, a funereal atmosphere seems to hover over the entire present ‘literary world,’ consisting of ten or so literary magazines, the review pages of a few newspapers, the populations of graduate creative writing programs, and that class of rich-in-cultural-capital people who find it important to read, say, The Corrections, to remain ‘part of the conversation.’  I think members of that version of literary culture represent themselves wrongly as the sole defenders of the realm, and that the dour pronouncements they make about the state of literature are narrow and misguided.  The death certificate can’t quite decide which is the primary cause: the hateful mass market, the decline of reading, the rise of movies, the rise of video games, the loss of some essential seriousness, the inadequate stewardship of ‘our’ culture.  (And just whose culture is it over which one feels a sense of ownership?)  The stance is one of bemused detachment at this fallen world we live in, combined with an unspoken assumption that literature and not movies or music is the true culture, and an exaggerated respect for the cultural achievements of the novelists of the mid-nineteenth to mid-twentieth centuries.  Nostalgia for the literary accomplishments of prior eras I understand – what interests and confuses me is the rhetoric of ‘dying literature,’ ‘the last critic,’ etc.

And why Wood?  The general trajectory one can extract from his writing is a fairly hoary narrative about how novels achieved self-consciousness in fits and spurts beginning roughly with Austen, truly emerging with Flaubert, and peaking with Henry James and Virginia Woolf.  I don’t entirely disagree with his thumbnail, but the exclusivity of this narrative is unwarranted.  First of all, self-consciousness, however you define that, is far from unidentifiable in the novels of Sterne, or Fielding, or, for that matter, Cervantes.  Second, the progression of literary styles from realism to modernism in the novel is a compelling story, but only one among tens of such narratives in comparatist literary history.  Why not erect the development of prose nonfiction in eighteenth-century periodicals as the crucible of modernity, or the egotistical sublime of the Romantic poets, or really go out on a limb and advocate for Shakespeare?  The question, then, is not so much with Wood’s particular but unremarkable story of past greatness, as with the enshrinement of that story, and of Wood as a figure, as melancholic touchstones for our dissatisfaction with the state of the world today.

My hypothesis is that the exaggerated mourning for lost cultural greatness is a strangely self-deluded form of wielding authority.  That is, the bemoaning of literature’s lack of importance today, of the dearth of ‘serious’ (another keyword) readers, is mostly emitted by people who are, paradoxically, both the most widely read and the most self-abnegating of belle lettrists.  What Wood and Franzen and The Believer and even n+1 share is that sense of coming at the fag-end of a period.  They are our cultural coroners, except I don’t think culture is dying.  As with Harper’s magazine’s shrill doomsaying, their real complaint is of their own insufficient authority.  As designated hitters for what counts as literature in U.S. culture, they wield considerable influence and even function as a coterie at times.  But the nostalgia for an imagined golden age tells me something else: that they believe that the culture-at-large stubbornly refuses to give them the chance deservedly to impose their quite narrow cultural tastes.  Unspoken lies an uneasy feeling that thirty years ago, style that wears itself like a merit badge and world-weary, paternalistic maleness should have been enough to guarantee lionization.  We were groomed to rule, but somewhere along the way the kingdom shrunk from Western culture to a sub-principality of Oprah-land.  As a counterexample, consider a figure very like Wood but who writes about movies: Anthony Lane, young, prose-stylish, British, retrograde, doesn’t suffer under the weight of literature’s supposed prior dominance.  What is delusive about this bunker mentality is that this country’s most widely circulated magazines are far more likely to publish a piece by Rick Moody or Dale Peck than by Fredric Jameson or Franco Moretti.

So literature, then, or at least a particular idea of it, seems to have become a narrative of decline whose retelling celebrates one’s refinement and sensitivity, one’s belief in what is of true value, and one’s allegiance to the superiority of an imaginary time before theory, before globalization, before now.  It’s as comfortable as a wool sweater.  One can see why Flaubert excites reviewers such as Wood: here is the one writer whose famously toilsome life of writing was rewarded with immortality.  Premature obsolescence becomes posthumous greatness.  He is the human allegory of the value of art beyond and in opposition to economic value.  (Not for nothing does Bourdieu identify Flaubert as the key figure of the nineteenth-century French aesthetic field.)  Praising Flaubert’s style, his adaptation of descriptive prose into a vehicle for a deliciously ambiguous form of seeing the world, allays not Bloom’s anxiety of influence, the need to kill the poetic father, but the anxiety of inheritance, the need to see oneself as the true heir of the revered father.  It’s a telling reversal, in that a vital artistic tradition should be much more eager to dethrone than rethrone canonical forbears.  It is a form of reading Flaubert’s will, and finding one’s own name as the beneficiary of all that (cultural) capital.

All of which is a shame, because on the matter of literary style, Wood is very good.  Like Hugh Kenner before him, he has a talent for producing something literary out of talking about literature.  And he is also illuminating on his authors, in the case of Flaubert identifying the strange contradiction between his constant satirizing of the bourgeois life and his deep immersion in it.  (It’s precarious realism, satire perched on the edge of mimesis, and you want to cheer as Flaubert keeps keeping his balance.)  But Wood stops there, as though he were the only person still having this conversation, like a jellyroll archivist.  The last critic, indeed.  But lots of people are talking about Flaubert, only in ways that are also informed by whole schools of thought that swam right past Wood.  I saw a lecture on Flaubert only two months ago by Sara Danius, the Swedish translator of Jameson, which treated many of the same issues as Wood, only it attempted to connect Flaubert’s aesthetic practice not only to a genealogy of novelists, but to his historical period itself.  D.A. Miller, the author of Jane Austen, or the Secret of Style, likewise makes the study of style into more than an anachronistic internal affair.

It’s on the relation of style to history that I think Flaubert continues to fascinate.  Sentimental Education (which is the true masterpiece, not Bovary) is the story of how a life is shaped by historical events of the grandest variety, but which can only be dimly sensed by the protagonists, absorbed as they are by the petty and familiar dramas of their own lives.  Even those characters who are politically and intellectually engaged are shown to have at best a limited purview on the conditions of their existence, while much action is taken for completely quixotic reasons that have nothing to do with their outcomes.  The novel is a tour-de-force of contingency, starting with the famous first scene, in which our hero Frederic first glimpses his great obsession, Madame Arnoux.  That Flaubert’s own life was marked by such an obsession fascinates, but Frederic’s cowardly and utterly sympathetic disappearance during the most epic moments of 1848 shapes the novel as much negatively as the pursuit of Madame Arnoux does positively.  In a novel saturated by looking at things, Flaubert is at pains to show the difficulty of seeing anything for what it is, and at many moments suggests the pointlessness of trying.  But conscripting Flaubert into playing the absent father in our own anxiety dreams about the death of literature and the marginality of writers ignores another drift of his work, not the one toward the autonomy of style, but toward seeing past the sentimental towards a world that is only ever represented but no less real for that fact.  A longed-for wholeness and a fallen world are by no means the special burden of recently disenfranchised social elites; they are, to paraphrase another nineteenth-century French novelist, illusions to be lost.

See All Dispatches

Selected Minor Works: Of the Proper Names of Peoples, Places, Fishes, &c.

Justin E. H. Smith

When I was an undergraduate in the early 1990s, an outraged student activist of Chinese descent announced to a reporter for the campus newspaper: “Look at me! Do I look ‘Oriental’? Do you see anything ‘Oriental’ about me?  No. I’m Asian.”  The problem, however, is that he didn’t look particularly ‘Asian’ either, in the sense that there is nothing about the sound one makes in uttering that word that would have some natural correspondence to the lad’s physiognomy.  Now I’m happy to call anyone whatever they want to be called, even if personally I prefer the suggestion of sunrises and sunsets in “Orient” and “Occident” to the arbitrary extension of an ancient (and Occidental) term for Anatolia all the way to the Sea of Japan.  But let us be honest: the 1990s were a dark period in the West to the extent that many who lived then were content to displace the blame for xenophobia from the beliefs of the xenophobes to the words the xenophobes happened to use.  Even Stalin saw that to purge bourgeois-sounding terms from Soviet language would be as wasteful as blowing up the railroad system built under the Tsar.

In some cases, of course, even an arbitrary sound may take on grim connotations in the course of history, and it can be a liberating thing to cast an old name off and start afresh.  I am certainly as happy as anyone to see former Dzerzhinsky Streets changed into Avenues of Liberty or Promenades of Multiparty Elections.  The project of pereimenovanie, or re-naming, was as important a cathartic in the collapsed Soviet Union as perestroika, or rebuilding, had been a few years earlier.  If the darkest period of political correctness is behind us, though, this is in part because most of us have realized that name-changes alone will not cut it, and that a real concern for social justice and equality that leaves the old bad names intact is preferable to a cosmetic alteration of language that allows entrenched injustice to go on as before – pereimenovanie without perestroika.

But evidently the PC coffin could use a few more nails yet, for the naive theory of language that guided the demands of its vanguard continues to inform popular reasoning as to how we ought to go about calling things.  Often, it manifests itself in what might be called pereimenovanie from the outside, which turns Moslems into Muslims, Farsi into Persian, and Bombay into Mumbai, as a result of the mistaken belief on the part of the outsiders that they are thereby, somehow, getting it right.  This phenomenon, I want to say, involves not just misplaced moral sensitivity, but also a fundamental misunderstanding of how peoples and places come by their names. 

Let me pursue these and a few other examples in detail.  These days, you’ll be out on your ear at a conference of Western Sinologists if you say “Peking” instead of “Beijing.”  Yet every time I hear a Chinese person say the name of China’s capital city, to my ear it comes out sounding perfectly intermediate between these two.  Westerners have been struggling for centuries to come up with an adequate system of transliteration for Chinese, but there simply is no wholly verisimilar way to capture Chinese phonology in the Latin alphabet, an alphabet that was not devised with Chinese in mind, indeed that had no inkling of the work it would someday be asked to do all around the world.  As Atatürk showed with his Latinization of Turkish, and Stalin with his failed scheme for the Cyrillicization of the Baltic languages, alphabets are political as hell. But decrees from the US Library of Congress concerning transliteration of foreign alphabets are not of the same caliber as the forced adoption of the Latin or Cyrillic scripts.  Standardization of transliteration has more to do with practical questions of footnoting and cataloguing than with the politics of identity and recognition.

Another example.  In Arabic, the vowel between the “m” and the “s” in the word describing an adherent of Islam is a damma.  According to Al-Ani and Shammas’s Arabic Phonology and Script (Iman Publishing, 1999), the damma is “[a] high back rounded short vowel which is similar to the English “o” in the words “to” and “do”.” So then, “Moslem” or “Muslim”?  It seems Arabic itself gives us no answer to this question, and indeed the most authentic way to capture the spirit of the original would probably be to leave the vowel out altogether, since it is short and therefore, as is the convention of Arabic orthography, unwritten.

And another example.  Russians refer to Russia in two different ways: on the one hand, it is Rus’, which has the connotation of deep rootedness in history, Glagolitic tablets and the like, and is often modified by the adjective “old”; on the other hand it is Rossiia, which has the connotation of empire and expanse, engulfing the hunter-gatherers of Kamchatka along with the Slavs at the empire’s core.  Greater Russia, as Solzhenitsyn never tires of telling us, consists of Russia proper, as well as Ukraine (the home of the original “Kievan Rus'”), and that now-independent country whose capital is Minsk.  Minsk’s dominion is called in German “Weissrussland,” and in Russian “Belorussiia.”  In other words, whether it is called “Belarus” or “Belorussia” what is meant is “White Russia,” taxonomically speaking a species of the genus “Russia.”  (Wikipedia tells us that the “-rus” in “Belarus” comes from “Ruthenia,” but what this leaves out is that “Ruth-” itself is a variation on “Rus’,” which, again, is one of the names for Muscovite Russia as well as the local name for White Russia.)

During the Soviet period, Americans happily called the place “Belorussia,” yet in the past fifteen years or so, the local variant, “Belarus,” has become de rigueur for anyone who might pretend to know about the region.  Of course, it is admirable to respect local naming practices, and symbolically preferring “Belarus” over “Belorussia” may seem a good way to show one’s pleasure at the nation’s newfound independence from Soviet domination. 

However (and here, mutatis mutandis, the same point goes for Mumbai), I have heard both Americans and Belarusans say the word “Belarus,” and I daresay that when Americans pronounce it, they are not saying the same word as the natives.  Rather, they are speaking English, just as they were when they used to say “Belorussia.”  Moreover, there are plenty of perfectly innocuous cases of inaccurate naming.  No one has demanded (not yet, anyway) that we start calling Egypt “Misr,” or Greece “Hellas.”  Yet this is what we would be obligated to do if we were to consistently employ the same logic that forces us to say “Belarus.”  Indeed, even the word we use to refer to the Germans is a borrowing from a former imperial occupier –namely, the Romans– and has nothing to do with the Germans’ own description of themselves as Deutsche.

In some cases, such as the recent demand that one say “Persian” instead of “Farsi,” we see an opposing tendency: rather than saying the word in some approximation of the local form, we are expected to say it in a wholly Anglicized way.  I have seen reasoned arguments from (polyglot and Western-educated) natives for the correctness and sensitivity of “Mumbai,” “Persian,” “Belarus,” and “Muslim,” but these all have struck me as rather ad hoc, and, as I’ve said, the reasoning for “Persian” was just the reverse of the reasoning for “Mumbai.”  In any case, monolingual Persian speakers and residents of Mumbai themselves could not care less. 

Perhaps the oddest example of false sensitivity of this sort comes not in connection with any modern ethnic group, but with a race of hominids that inhabited Europe prior to the arrival of Homo sapiens and were wiped out by the newcomers about 29,000 years ago.  In the 17th century, one Joachim Neumann adopted the Hellenized form of his last name, “Neander,” and proceeded to die in a valley that subsequently bore his name: the Neanderthal, or “the valley of the new man.”  A new man, of sorts, was found in that very valley two centuries later, to wit, Homo neanderthalensis.

Now, as it so happens, “Thal” is the archaic version of the German word “Tal.”  Up until the very recent spelling reforms imposed at the federal level in Germany, vestigial “h”s from earlier days were tolerated in words, such as “Neanderthal,” that had an established record of use.  If the Schreibreform had been slightly more severe, we would have been forced to start writing “Göte” instead of the more familiar “Goethe.”  But Johann Wolfgang was a property the Bundesrepublik knew it dare not touch. The “h” in “Neanderthal” was however axed, but the spelling reform was conducted precisely to make German writing match up with German speech: there never was a “th” sound in German, as there is in English, and so the change from “Thal” to “Tal” makes no phonetic difference. 

We have many proper names in North America that retain the archaic spelling “Thal”, such as “Morgenthal” (valley of the morning), “Rosenthal” (valley of the roses), etc., and we happily pronounce the “th” in these words as we do our own English “thaw.”  Yet, somehow over the past ten years or so Americans have got it into their heads that they absolutely must say Neander-TAL, sans voiceless interdental fricative, as though this new standard of correctness had anything to do with knowledge of prehistoric European hominids, as though the Neanderthals themselves had a vested interest in the matter.  I’ve even been reproached myself, by a haughty, know-it-all twelve-year-old, no less, for refusing to drop the “th”. 

The Neanderthals, I should not have to point out, were illiterate, and the presence or absence of an “h” in the word for “valley” in a language that would not exist until several thousand years after their extinction was a matter of utter indifference to them.  Yet doesn’t the case of the Neanderthal serve as a vivid reductio ad absurdum of the naive belief that we can set things right with the Other if only we can get the name for them, in our own language, right?  The names foreigners use for any group of people (or prehuman hominids, for that matter) can only ever be a matter of indifference for that group itself, and it is nothing less than magical thinking to believe that if we just get the name right we can somehow tap into that group’s essence and refer to them not by some arbitrary string of phonemes, but as they really are in their deepest and truest essence. 

This magical thinking informs the scriptural tradition of thinking about animals, according to which the prelapsarian Adam named all the different biological kinds not with arbitrary sounds, but in keeping with their true natures.  Hence, the task of many European naturalists prior to the 18th century was to rediscover this uncorrupted knowledge of nature by recovering the lost language of Adam, and thus, oddly enough, zoology and Semitic philology constituted two different domains of the same general project of inquiry.

Some very insightful thinkers, such as Gottfried Leibniz, noticed that ancient Hebrew too, just like modern German, is riddled with corrupt verb forms and senseless exceptions to rules, and sharply inferred from this that Hebrew was no more divine than any vulgate.  Every vocabulary human beings have ever come up with to refer to the world around them has been nothing more than an arbitrary, exception-ridden, haphazard set of sounds, and in any case the way meanings are produced seems to have much more to do with syntax –the rules governing the order in which the sounds are put together– than with semantics– the correspondence between the sounds and the things in the world they are supposed to pick out. 

This hypercorrectness, then, is ultimately not just political, but metaphysical as well.  It betrays a belief in essences, and in the power of language to pick these out.  As John Dupré has compellingly argued, science educators often end up defending a supercilious sort of taxonomical correctness when they declaim that whales are not fish, in spite of the centuries of usage of the word “fish” to refer, among other things, to milk-producing fish such as whales.  The next thing you know, smart-ass 12-year-olds are lecturing their parents about the ignorance of those who think whales are fish, and another generation of blunt-minded realists begins its takeover.  Such realism betrays too much faith in the ability of authorities –whether marine biologists, or the oddly prissy postmodern language police in the English departments– to pick out essences by their true names.  It is doubtful that this faith ever did much to protect anyone’s feelings, while it is certain that it has done much to weaken our descriptive powers, and to take the joy out of language. 

Negotiations 7: Channeling Britney

(Note: Jane Renaud wrote a great piece on this subject last week. I hope the following can add to the conversation she initiated.)

When I first heard of Daniel Edwards’ Britney sculpture (Monument to Pro-Life), I was fascinated. What a rich stew: a pop star whose stock-in-trade has been to play the innocent/slut (with rather more emphasis on the latter) gets sculpted by a male artist as a pro-life icon and displayed in a Williamsburg gallery! Gimmicky, to be sure; nonetheless, the overlapping currents of Sensationalism, Irony and Politics were irresistible, so I took myself out to the Capla Kesting Fine Art Gallery on Thursday to have a look.

I am not a fan of pop culture. My attitude toward it might best be characterized as Swiss. In conversation, I tend to sniff at it. “Well,” I have been known to say, “it may be popular, but it’s not culture.” I do admit to a lingering fondness for Britney, but that has less to do with her abilities as chanteuse than it does with the fact that, as a sixteen-year-old boy, I moved from the WASPy northeast to Nashville, Tennessee and found myself studying in a seraglio of golden-haired, pig-tailed, Catholic schoolgirls, each one of them a replica of early Britney and each one of them, like her, as common and as unattainable as a species of bird. What can I say? I was sixteen. Despise the sin, not the sinner.

I was curious to know the extent to which this sculpture would be a monument to pop culture—did the artist, Daniel Edwards, fancy himself the next Jeff Koons?—and surprised to discover that, having satisfied my puerile urges (a surreptitious glance at the breasts, a disguised study of the money shot), my experience of the piece was in no way mediated by my awareness that its model was a pop star. “Britney Spears” is not present in the piece, and its precursor is not Koons’ Michael Jackson and Bubbles or Warhol’s silk-screens of Marilyn Monroe. One has to go much further back than that. Its precursor is actually Michelangelo’s Pietà.

In both cases, the spectacular back story (Mary with dead Christ on her lap, Britney with Sean’s head in her cooch) is overwhelmed by the temporal event that grounds it; so that the Pietà is nothing more (nor less) than Mother and Dead Son, and Monument to Pro-Life becomes simply Woman Giving Birth. Where Koons and Warhol empty the role of the artist as creative genius and replace it with artist as mirror to consumer society, Edwards (and Michelangelo well before him) empties the divine (the divinity of Christ, the divinity of the star) and replaces it with the human. Edwards, then, is doing something very tricky here, and if one can stomach the nausea-inducing gimmickry of the work, there’s a lot worth considering.

First of all is the composition of the work. The subject is on all fours, in a position that, as Jane Renaud wryly observed in these pages last week, might be more appropriate for getting pregnant than for giving birth. She is on a bear-skin rug; her eyes are heavily lidded, her lips slightly parted, as though she might be about to moan or to sing. And yet the sculpture is in no way pornographic or even titillating. There is nothing on her face to suggest either pain or ecstasy. The person seems to be elsewhere, even if her body is present, and the agony we associate with childbirth is elsewhere. In fact, with her fingers laid gently into the ears of the bear, not clutching or tearing at them, she seems to be channeling all her emotions into its head. Its eyes are wide open, its mouth agape and roaring. The subject is emptying herself, channeling at both ends, serenely so, a Buddha giving birth, without tension at the front end and without blood or tearing at the rear. The child’s head emerges as cleanly, and as improbably, as a perfect sphere from a perfect diamond. This is a revolution in birthing. Is that the reward for being pro-life? Which brings us to the conceptual component of Monument to Pro-Life.

To one side of the sculpture stands a display of pro-life literature. You cannot touch it; you cannot pick it up; you cannot read it even if you wanted to because it is in a case, under glass. This is not, I think, because there is not enough pro-life literature to go around, and it hints at the possibility that the artist is being deliberately disingenuous, that he is commenting both on the pro-life movement and on its monumental aspirations. The sculpture is out there in the air, naked and exposed, while the precious literature is encased and protected. Shouldn’t it be the other way around? It’s almost as if the artist is saying, “This is the pro-life movement’s relationship to women: It is self-interested and self-preserving; and in its glassed-in, easy righteousness it turns them into nothing more than vessels, emptying machines. It prefers monuments to mothers, literature to life.”

Now lest you think that I am calling Daniel Edwards the next Michelangelo, let me assure you that I most definitely am not. As conceptually compelling as I found Monument to Pro-Life to be, I also found it aesthetically repugnant. Opinions are like assholes—everybody has one—but this sculpture is hideous to look at. It’s made of fiberglass, for god’s sake, which gives it a reddish, resiny cast, as though the subject had been poached, and a texture which made me feel, just by looking at it, that I had splinters under my fingernails. I know we all live in a post-Danto age of art criticism, that ideas are everything now, and that the only criterion for judging a work of art is its success in embodying its own ideas; but as I left the gallery I couldn’t help thinking of Plato and Diogenes. When Plato defined man as a “featherless biped,” the Cynic philosopher is said to have flung a plucked chicken into the classroom, crying “Here is Plato’s man.” Well, here is Danto’s art. With a price tag of $70,000, which it will surely fetch, he can have it.

Monday Musing: The Palm Pilot and the Human Brain, Part II

Part II: How Brains Might Work

Two weeks ago I wrote the first part of this column in which I made an attempt to explain how it is that we are able to design very complex machines like computers: we do it by employing a hierarchy of concepts, each layer of which builds upon the layer below it, ultimately allowing computers to perform seemingly miraculous tasks like beating Garry Kasparov at chess at the highest levels of the hierarchy, while all the way down at the lowest layers, the only thing going on is that some electrons are moving about on a tiny wafer of silicon according to simple physical rules. [Photo shows Kasparov in Game 2 of the match.] I also tried to explain what gives computers their programmable flexibility. (Did you know, for example, that Deep Blue, the computer which drove Kasparov to hair-pulling frustration and humiliation in chess, now takes reservations for United Airlines?)

But while there is a difference between understanding something that we ourselves have built (we know what the conceptual layers are because we designed them, one at a time, after all) and trying to understand something like the human brain, designed not by humans but by natural selection, there is also a similarity: brains also do seemingly miraculous things, like the writing of symphonies and sonnets, at the highest levels, while near the bottom we just have a bunch of neurons connected together, digitally firing (action potentials) away, again, according to fairly simple physical rules. (Neuron firings are digital because they either fire or they don’t–like a 0 or a 1–there is no such thing as half of a firing or a quarter of one.) And like computers, brains are also very flexible at the highest levels: though they were not designed by natural selection specifically to do so, they can learn to do long-division, drive cars, read the National Enquirer, write cookbooks, and even build and operate computers, in addition to a million other things. They can even turn “you” off, as if you were a battery operated toy, if they feel they are not getting enough oxygen, thereby making you collapse to the ground so that gravity can help feed them more of the oxygen-rich blood that they crave (you know this well, if you have ever fainted).

To understand how brains do all this, this time we must attempt to impose a conceptual framework on them from the outside, as it were; a kind of reverse-engineering. This is what neuroscience attempts to do, and as I promised last time, today I would like to present a recent and interesting attempt to construct just such a scaffolding of theory on which we might stand while trying to peer inside the brain. This particular model of how the brain works is due to Jeff Hawkins, the inventor of the Palm Pilot and the Treo Smartphone, and a well-respected neuroscientist. It was presented by him in detail in his excellent book On Intelligence, which I highly recommend. What follows here is really just a very simplified account of the book.

Let’s jump right into it then: Hawkins calls his model the “Memory-Prediction” framework, and its core idea is summed up by him in the following four sentences:

The brain uses vast amounts of memory to create a model of the world. Everything you know and have learned is stored in this model. The brain uses this memory-based model to make continuous predictions of future events. It is the ability to make predictions about the future that is the crux of intelligence. (On Intelligence, p. 6)

Hawkins focuses mainly on the neocortex, which is the part of the brain responsible for most higher level functions such as vision, hearing, mathematics, music, and language. The neocortex is so densely packed with neurons that no one is exactly sure how many there are, though some neuroscientists estimate the number at about thirty billion. What is astonishing is to realize that:

Those thirty billion cells are you. They contain almost all your memories, knowledge, skills, and accumulated life experience… The warmth of a summer day and the dreams we have for a better world are somehow the creation of these cells… There is nothing else, no magic, no special sauce, only neurons and a dance of information… We need to understand what these thirty billion cells do and how they do it. Fortunately, the cortex is not just an amorphous blob of cells. We can take a deeper look at its structure for ideas about how it gives rise to the human mind. (Ibid., p. 43)

The neocortex is a thin sheet consisting of six layers which envelops the rest of the brain and is folded up in a crumpled way. This is what gives the brain its walnutty appearance. (If completely unfolded, it would be quite thin–only a couple of millimeters–and would cover an area about the size of a large dinner napkin.) Now, while the neocortex looks pretty much the same everywhere with its six layers, different regions of it are functionally specialized. For example, Broca’s area handles the rules of linguistic grammar. Other areas of the neocortex have also been mapped out functionally in quite some detail by techniques such as looking at brains with localized damage (due to stroke or injury) and seeing what functions are lost in the patient. (Antonio Damasio presents many fascinating cases in his groundbreaking book Descartes’ Error.) But while everyone else was looking for differences in the various functional areas of the cortex, a very interesting observation was made by a neurophysiologist named Vernon Mountcastle (I was fortunate enough to attend a brilliant series of lectures by him on basic physiology while I was an undergraduate!) at Johns Hopkins University in 1978: he noticed that all the different regions of the neocortex look pretty much exactly the same, and have the same structure, whether they process language or handle touch. And he proposed that since they have the same structure, maybe they are all performing the same basic operation, and that maybe the neocortex uses the same computational tool to do everything. Mountcastle suggested that the only difference in the various areas is how they are connected to each other and to other parts of the nervous system. Now Hawkins says:

Scientists and engineers have for the most part been ignorant of, or have chosen to ignore, Mountcastle’s proposal. When they try to understand vision or make a computer that can “see,” they devise vocabulary and techniques specific to vision. They talk about edges, textures, and three-dimensional representations. If they want to understand spoken language, they build algorithms based on rules of grammar, syntax, and semantics. But if Mountcastle is correct, these approaches are not how the brain solves these problems, and are therefore likely to fail. If Mountcastle is correct, the algorithm of the cortex must be expressed independently of any particular function or sense. The brain uses the same process to see as to hear. The cortex does something universal that can be applied to any type of sensory or motor system. (Ibid., p. 51)

The rest of Hawkins’s project now becomes laying out in detail what this universal algorithm of the cortex is, how it functions in different functional areas, and how the brain implements it. First he tells us that the inputs to various areas of the brain are essentially similar and consist basically of spatial and temporal patterns. For example, the visual cortex receives a bundle of inputs from the optic nerve, which is connected to the retina in your eye. These inputs in raw form represent the image that is being projected onto the retina in terms of a spatial pattern of light frequencies and amplitudes, and how this image (pattern) is changing over time. Similarly the auditory nerves carry input from the ear in terms of a spatial pattern of sound frequencies and amplitudes which also varies with time, to the auditory areas of the cortex. The main point is that in the brain, input from different senses is treated the same way: as a spatio-temporal pattern. And it is upon these patterns that the cortical algorithm goes to work. This is why spoken and written language are perceived in a remarkably similar way, even though they are presented to us completely differently in simple sensory terms. (You almost hear the words “simple sensory terms” as you read them, don’t you?)

Now we get to one of Hawkins’s key ideas: unlike a computer (whether sequential or parallel), the brain does not compute solutions to problems; it retrieves them from memory: “The entire cortex is a memory system. It isn’t a computer at all.” (Ibid., p. 68) To illustrate what he means by this, Hawkins provides an example: imagine, he says, catching a ball thrown at you. If a computer were to try to do this, it would attempt to estimate its initial trajectory and speed and then use some equations to calculate its path, how long it will take to reach you, etc. This is not anything like what your brain does. So how does your brain do it?

When a ball is thrown, three things happen. First, the appropriate memory is automatically recalled by the sight of the ball. Second, the memory actually recalls a temporal sequence of muscle commands. And third, the retrieved memory is adjusted as it is recalled to accommodate the particulars of the moment, such as the ball’s actual path and the position of your body. The memory of how to catch a ball was not programmed into your brain; it was learned over years of repetitive practice, and it is stored, not calculated, in your neurons. (Ibid., p. 69)
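The contrast Hawkins is drawing can be caricatured in a few lines of code. This is only my cartoon of the memory-versus-computation distinction, not his model: the handful of stored “memories” and the nearest-neighbor lookup below are invented for illustration. The first function solves the projectile equations the way a computer program might; the second simply retrieves the stored motor sequence whose remembered situation most resembles the current one, which would then be adjusted on the fly.

```python
import math

def compute_landing_distance(speed, angle_deg, g=9.81):
    """The 'computer' approach: solve the projectile equations explicitly."""
    angle = math.radians(angle_deg)
    return speed ** 2 * math.sin(2 * angle) / g

# The 'cortex' approach: a few remembered situations, each paired with the
# motor sequence that worked last time.
stored_catches = [
    {"situation": (8.0, 30.0),  "motor_sequence": ["step left", "raise glove", "close hand"]},
    {"situation": (12.0, 45.0), "motor_sequence": ["step back", "reach up", "close hand"]},
    {"situation": (5.0, 60.0),  "motor_sequence": ["stay put", "cup hands", "close hand"]},
]

def recall_catch(speed, angle_deg):
    """Retrieve the closest stored memory instead of computing a trajectory."""
    def distance(memory):
        s, a = memory["situation"]
        return (s - speed) ** 2 + (a - angle_deg) ** 2
    best = min(stored_catches, key=distance)
    return best["motor_sequence"]  # recalled sequence, to be adjusted to the moment

print(compute_landing_distance(10.0, 40.0))  # a number, from equations
print(recall_catch(10.0, 40.0))              # a remembered sequence of actions
```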

At first blush it may seem that Hawkins is getting away with some kind of sleight of hand here. What does he mean that the memories are just retrieved and adjusted for the particulars of the situation? Wouldn’t that mean you would need millions of memories for a scenario like catching a ball, since any two ball-catching situations can differ from each other in a million little ways? Well, no. Hawkins now introduces a way around this problem, called invariant representation, which we will get to soon. Cortical memories differ from computer memory in four ways, Hawkins tells us:

  1. The neocortex stores sequences of patterns.
  2. The neocortex recalls patterns auto-associatively.
  3. The neocortex stores patterns in an invariant form.
  4. The neocortex stores patterns in a hierarchy.

Let’s go through these one at a time. The first feature explains why, when you are telling a story about something that happened to you, you must go through it in sequence (and why people so often include boring details in their stories!), or you may not remember what happened; it is like only being able to remember a song by singing it to yourself in order, one note at a time. (You couldn’t recite the notes backward, or even recite the alphabet backward very fast, while a computer could.) Even very low-level sensory memories work this way: the feel of velvet as you run your hand over it is just the pattern of very quick sequential nerve firings that occurs as your fingers run over the fibers. The pattern is a different sequence if you are running your hand over gravel, say, and that is how you recognize the difference. Computers can be made to store memories sequentially, such as a song, but they do not do this automatically, the way the cortex does.
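A toy way to see why recall runs forward only: suppose the memory stores, for each note, the note that comes next. Then playing a melody forward is a chain of cheap lookups, while playing it backward means searching the whole store at every step. This dictionary-of-transitions sketch is my own simplification, of course, not the cortical mechanism.

```python
# Store a melody as "what comes next after each note" transitions.
melody = ["C", "D", "E", "F", "G", "A", "B"]
next_note = {a: b for a, b in zip(melody, melody[1:])}

def recall_forward(start):
    """Forward recall is easy: each note cues the next one."""
    out = [start]
    while out[-1] in next_note:
        out.append(next_note[out[-1]])
    return out

def recall_backward(end):
    """Backward recall is awkward: every step requires searching all transitions."""
    out = [end]
    while True:
        previous = [a for a, b in next_note.items() if b == out[-1]]
        if not previous:
            break
        out.append(previous[0])
    return out

print(recall_forward("C"))   # ['C', 'D', 'E', 'F', 'G', 'A', 'B']
print(recall_backward("B"))  # ['B', 'A', 'G', 'F', 'E', 'D', 'C']
```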

Auto-associativity is the second feature of cortical memory; it means that patterns are associated with themselves, which makes it possible to retrieve a whole pattern when only a part of it is presented to the system.

…imagine you see a person waiting for a bus but can only see part of her because she is standing partially behind a bush. Your brain is not confused. Your eyes only see parts of a body, but your brain fills in the rest, creating a perception of a whole person that’s so strong you may not even realize you’re only inferring. (Ibid., p. 74)

Temporal patterns are retrieved and completed in a similar way. In a noisy environment we often don’t hear every single word that someone says to us, but our brain fills in what it expects to have heard. (If Robin calls me on Sunday night on his terrible cell phone and says, “Did you …crackle-pop… your Monday column yet?”, my brain will automatically fill in the word “write.”) Sequences of memory patterns recalled auto-associatively essentially constitute thought.
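Completing a whole pattern from a fragment is exactly what a classic Hopfield-style auto-associative network does, so a tiny one makes a fair illustration here, with the caveat that Hawkins is not proposing a Hopfield net specifically; the plus/minus-one coding and the update rule below are standard textbook choices, not his.

```python
import numpy as np

# Two stored binary patterns (+1/-1); think of them as crude 8-"pixel" images.
patterns = np.array([
    [ 1,  1,  1, -1, -1, -1,  1, -1],
    [-1,  1, -1,  1, -1,  1, -1,  1],
])

# Hebbian weights: each stored pattern is associated with itself.
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0)

def complete(partial, steps=5):
    """Settle from a partial cue to the nearest stored pattern."""
    state = partial.astype(float).copy()
    for _ in range(steps):
        state = np.sign(W @ state)
        state[state == 0] = 1
    return state.astype(int)

# Present only part of the first pattern: blank out its second half.
cue = patterns[0].copy()
cue[4:] = -1
print(complete(cue).tolist())   # recovers the full first stored pattern
print(patterns[0].tolist())
```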

Now we get to invariant representations, the third feature of cortical memory. Notice that while computer memories are designed for 100% fidelity (every bit of every byte is reproduced flawlessly), our brains do not store information this way. Instead, they abstract out important relationships in the world and store those, leaving out most of the details. Imagine talking to a friend who is sitting right in front of you. As you talk to her, the exact pattern of pixels coming over the optic nerve from your retina to your visual cortex is never the same from one moment to another. In fact, if you sat there for hours, no pattern would ever repeat because both of you are moving slightly, the light is changing, etc. Nevertheless you have a continuous sense of your friend’s face being in front of you. How does that happen? Because your brain’s internal pattern of representation of your friend’s face does not change, even though the raw sensory information coming in over the optic nerve is always changing. That’s invariant representation.

And it is implemented in the brain using a hierarchy of processing. Just to give a taste of what that means, every time your friend’s face or your eyes move, a new pattern comes over the optic nerve. In the visual input area of your cortex, called V1, the pattern of activity is also different each time anything in your visual field moves, but several levels up in the hierarchy of the visual system, in your facial recognition area, there are neurons which remain active as long as your friend’s face is in your visual field, at any angle, in any light, and no matter what makeup she’s wearing. And this type of invariant representation is not limited to the visual system but is a property of every sensory and cortical system. So how is this invariant representation accomplished?
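Hawkins’s own answer to that question has to wait for Part III, but just to fix the idea that higher levels stay stable while lower levels churn, here is a toy illustration of my own (not his mechanism): the “V1-like” activity pattern is different for every position of the face, while a higher-level “face cell” that pools over all positions stays on regardless.

```python
import numpy as np

FACE = np.array([1, 0, 1, 1])   # a crude 1-D "face" template

def low_level_activity(position, field_size=12):
    """V1-like layer: the raw pattern changes every time the face shifts."""
    field = np.zeros(field_size, dtype=int)
    field[position:position + len(FACE)] = FACE
    return field

def face_cell_active(field):
    """Higher-level 'face cell': pools over all positions, so its response is
    the same no matter where in the field the face currently sits."""
    for i in range(len(field) - len(FACE) + 1):
        if np.array_equal(field[i:i + len(FACE)], FACE):
            return True
    return False

for position in (0, 3, 7):
    field = low_level_activity(position)
    print(position, field.tolist(), face_cell_active(field))
# The low-level pattern differs each time, but the face cell's answer is
# invariant: True in every case.
```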

———————–

I’m sorry, but unfortunately I have once again run out of time and space and must continue this column next time. Despite my attempts to present Hawkins’s theory as concisely as possible, I cannot condense it further without losing essential parts of it, and there is still quite a bit left. So I must (reluctantly) write a Part III to this column, in which I will present Hawkins’s account of how invariant representations are implemented, how memories are used to make predictions (the essence of intelligence), and how all of this is realized in the hierarchical layers of the actual cortex. Look for it on May 8th. Happy Monday, and have a good week!

NOTE: Part III is here. My other Monday Musing columns can be found here.

Sunday, April 16, 2006

Medicine and Race

Also in the Economist, medicine factors in race:

LAST month researchers from the University of Texas and the University of Mississippi Medical Centre published a paper in the New England Journal of Medicine. They had studied three versions (or alleles, as they are known) of a gene called PCSK9. This gene helps clear the blood of low-density lipoprotein (LDL), one of the chemical packages used to transport cholesterol around the body. Raised levels of LDL are associated with heart disease. The effect of all three types of PCSK9 studied by Jonathan Cohen and his colleagues was to lower the LDL in a person’s bloodstream by between 15% and 28%, and the risk of coronary heart disease by between 47% and 88%, compared with people with more common alleles of the gene.

Such studies happen all the time and are normally unremarkable. But this was part of a growing trend to study individuals from different racial groups and to analyse the data separately for each group. The researchers asked the people who took part in the study which race they thought they belonged to and this extra information allowed them to uncover more detail about the risk that PCSK9 poses to everyone.

Yet race and biology are uncomfortable bedfellows. Any suggestion of systematic biological differences between groups of people from different parts of the world—beyond the superficially obvious ones of skin colour and anatomy—is almost certain to raise hackles.

How Women Spur Economic Growth

In the Economist:

[I]t is misleading to talk of women’s “entry” into the workforce. Besides formal employment, women have always worked in the home, looking after children, cleaning or cooking, but because this is unpaid, it is not counted in the official statistics. To some extent, the increase in female paid employment has meant fewer hours of unpaid housework. However, the value of housework has fallen by much less than the time spent on it, because of the increased productivity afforded by dishwashers, washing machines and so forth. Paid nannies and cleaners employed by working women now also do some work that used to belong in the non-market economy.

Nevertheless, most working women are still responsible for the bulk of chores in their homes. In developed economies, women produce just under 40% of official GDP. But if the worth of housework is added (valuing the hours worked at the average wage rates of a home help or a nanny) then women probably produce slightly more than half of total output.

The increase in female employment has also accounted for a big chunk of global growth in recent decades. GDP growth can come from three sources: employing more people; using more capital per worker; or an increase in the productivity of labour and capital due to new technology, say. Since 1970 women have filled two new jobs for every one taken by a man. Back-of-the-envelope calculations suggest that the employment of extra women has not only added more to GDP than new jobs for men but has also chipped in more than either capital investment or increased productivity. Carve up the world’s economic growth a different way and another surprising conclusion emerges: over the past decade or so, the increased employment of women in developed economies has contributed much more to global growth than China has.
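To see how the back-of-the-envelope valuation of housework works, here is a small worked example; the numbers are invented purely to show the arithmetic the Economist describes, not the magazine’s actual figures.

```python
# Illustrative arithmetic only: all figures below are made up.
official_gdp = 100.0            # total measured output, arbitrary units
women_share_official = 0.39     # "just under 40% of official GDP"

# Value unpaid housework at the wage of a home help or nanny.
unpaid_hours = 60.0             # hypothetical total hours of unpaid housework
home_help_wage = 0.7            # hypothetical wage per hour, same units
housework_value = unpaid_hours * home_help_wage

women_housework_share = 0.8     # assume women do most of the unpaid chores

total_output = official_gdp + housework_value
women_output = (women_share_official * official_gdp
                + women_housework_share * housework_value)

print(round(women_output / total_output, 3))  # comes out a little above one half
```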

A Close Look at Terror and Liberalism

Via Crooked Timber, The Couscous Kid over at Aaronovitch Watch has an extensive review of Paul Berman’s Terror and Liberalism (in 1, 2, 3, 4, 5, 6, 7 posts).

Tracing Berman’s arguments back to his sources isn’t always easy. There’s a “Note to the Reader” at the end that lists a few of the works consulted, but Berman habitually cites books without providing page references, and that irritates. (Terror and Liberalism doesn’t have an index, either, and that also irritates.) Sometimes you don’t need to chase up his references to find fault with the book. He calls Franz Ferdinand the “grand duke of Serbia” on p.32, for example, and he’s become the “Archduke of Serbia” by p.40, when he wasn’t either; Franz Ferdinand was the Archduke of Austria, and Serbia lay outside the Habsburg lands. (Funny, though, that the errors in basic general knowledge should come to light when it comes to dealing with Serbia and Sarajevo, of all places.) But much of the rest of the time, it’s an interesting exercise to compare what Berman says with what his sources say. I haven’t done this comprehensively in what follows (even I’ve got better things to do with my time), and I’m not saying anything in what follows about the two chapters on Sayyid Qutb because I haven’t read any of his works and don’t know much about him, apart from what Berman tells me, and, as will be clear from what follows, I don’t think Berman’s an entirely reliable source. But I have done a bit of checking around with some of the books that I’ve got to hand. How does Berman use his sources? Often carelessly, and not especially fair-mindedly, as we shall see.

battle in the brain


In debates over creationist doctrines, evolutionary biologists often are hard-pressed to explain how nature could make something as intricate as the human brain. Even Alfred Wallace, the 19th century biologist who discovered natural selection with Charles Darwin, could not accept that such a flexible organ of learning and thought could emerge by trial and error.

No two brains are exactly alike, despite their overall anatomical similarity. Each brain changes throughout a lifetime, altered by experience and aging. Even the simplest mental activities, such as watching a moving dot, can involve slightly different areas in different people’s brains, studies show.

Underlying every personal difference in thought, attitude and ability is an astonishing variety of brain cells, scientists have discovered.

more from the LA Times here.

the apology of peter beinart

The neoconservatives now pretty much argue that they’re the new anti-totalitarian liberals. They more or less accepted the principles of the New Deal in the ’50s and ’60s, and largely feel that they’ve carried on the tradition of liberal interventionism. What I’d like to know from you is this: what part of Schlesinger, Truman, and Scoop Jackson’s lunch have the neocons not eaten?

That’s an important purpose of the book, to argue against that idea, and I would say a couple of things. The first is that the recognition of American fallibility is a very critical element of the liberal tradition, very central to Niebuhr’s thinking, which then became an important element in the Truman administration. That idea manifests itself internationally in a sympathy for international institutions, a belief that while it’s possible that the United States can be a force for good—indeed, that America must be a force for good in the world, which is certainly what neocons believe—that America can also be a force for evil. That since America can be corrupted by unrestrained power, America should take certain steps to limit its power and to express it through international institutions. That, I think, is the first element of the liberal tradition that has been lost in neocon thinking.

The second element that’s been lost, I think, is the recognition that America’s ability to be a force for good in the world rests on the economic security of average Americans. The early neocons had a certain sympathy for the labor movement, and the labor movement was a very important part of Cold War liberalism, because the ability of the United States to be generous around the world really depended on the government’s willingness to take responsibility for the economic security of its own people. Of course, that would have to mean something different today than it did in the 1950s. But widespread economic security remains a very important basis upon which the United States can act in the world, because it maintains the support of the American people for that action. I think that has been lost in neocon thinking since they adopted the—as I see it— quite radical economic ideology of the American Right.

more from the interview at the Atlantic Unlimited here.

stage left


On April 17th, to mark the centennial of the birth of the playwright Clifford Odets, Lincoln Center Theatre will open a new production of “Awake and Sing!,” Odets’s first full-length play and the one that made him a literary superstar in 1935, at the age of twenty-eight. In the years that followed, this magazine dubbed Odets “Revolution’s No. 1 Boy”; Time put his face on its cover; Cole Porter rhymed his name in song (twice); and Walter Winchell coined the word “Bravodets!” “Of all people, you Clifford Odets are the nearest to understand or feel this American reality,” his friend the director Harold Clurman wrote in 1938, urging him “to write, write, write—because we need it so much.” “You are the Man,” Clurman told him.

more from The New Yorker here.

goytisolo


On a blazing blue afternoon last winter, I met the Spanish expatriate novelist Juan Goytisolo at an outdoor cafe in Marrakesh. It was easy to spot the 75-year-old writer, sitting beneath an Arabic-language poster of himself taped to the cafe window. He was reading El País, the Spanish newspaper to which he has contributed for decades. Olive-skinned, with a hawk nose and startlingly pale blue eyes, he had wrapped himself against the winter chill in a pullover, suede jacket, checked overcoat and two pairs of socks.

Considered by many to be Spain’s greatest living writer, Goytisolo is in some ways an anachronistic figure in today’s cultural landscape. His ideas can seem deeply unfashionable. For him, writing is a political act, and it is the West, not the Islamic world, that is waging a crusade. He is a homosexual who finds gay identity politics unappealing and who lived for 40 years with a French woman he considers his only love. “I don’t like ghettos,” he informed me. “For me, sexuality is something fluid. I am against all we’s.” The words most commonly used to describe his writing are “transgressive,” “subversive,” “iconoclastic.”

more from the NY Times magazine here.