Leonora Carrington


A few months ago, I found myself next to a Mexican woman at a dinner party. I told her that my father’s cousin, whom I’d never met and knew little about, was an artist in Mexico City. “I don’t expect you’ve heard of her, though,” I said. “Her name is Leonora Carrington.” The woman was taken aback. “Heard of her? My goodness, everyone in Mexico has heard of her. Leonora Carrington! She’s hugely famous. How can she be your cousin, and yet you know nothing about her?”

How indeed? At home, I looked her up, and found myself plunged into a world of mysterious and magical paintings. Dark canvases dominated by a large, sinister-looking house; strange and slightly menacing women, mostly tall and wearing big cloaks; ethereal figures, often captured in the process of changing from one form to another; faces within bodies; long, spindly fingers; horses, dogs and birds.

More from The Guardian here.

If modernism had a pope, it was Picasso.

PABLO PICASSO’S spell over 20th-century art can perhaps be summed up in five words spoken by the Armenian-American painter Arshile Gorky in 1934. Informed that Picasso had recently started making messier paintings, the very tidy Gorky famously replied, “If Picasso drips, I drip.”

Picasso’s staggering output — more than 20,000 paintings, drawings, prints, sculptures, and photographs — gave him an exposure unprecedented for a living artist. The fact that he spearheaded the century’s most important movement (cubism), invented its defining technique (collage), and painted its most imposing masterpiece (“Guernica”) makes it hard to think of any modern artist — including rivals and elders — who didn’t at some point in his career take cues from Picasso’s Paris studio. If modernism had a pope, it was Picasso.

More from Boston Globe Ideas here.

GOT OPTIMISM? THE WORLD’S LEADING THINKERS SEE GOOD NEWS AHEAD

From Edge:

While conventional wisdom tells us that things are bad and getting worse, scientists and the science-minded among us see good news in the coming years. That’s the bottom line of an outburst of high-powered optimism gathered from the world-class scientists and thinkers who frequent the pages of Edge, in an ongoing conversation among third culture thinkers (i.e., those scientists and other thinkers in the empirical world who, through their work and expository writing, are taking the place of the traditional intellectual in rendering visible the deeper meanings of our lives, redefining who and what we are).

The 2007 Edge Question marks the 10th anniversary of Edge, which began in December 1996 as an email to about fifty people. In 2006, Edge had more than five million individual user sessions.

I am pleased to present the 2007 Edge Question:

What Are You Optimistic About? Why?

The 160 responses to this year’s Edge Question span topics such as string theory, intelligence, population growth, cancer, climate and much, much more. Contributing their optimistic visions are a who’s who of interesting and important world-class thinkers.

Got optimism? Welcome to the conversation!

More here.

Gene doubles breast cancer risk

From BBC News:Gene_2

Women with a damaged copy of the gene called PALB2 have twice the risk of breast cancer, the Institute of Cancer Research scientists found. They estimate that faulty PALB2 causes about 100 cases of breast cancer in the UK each year. Two damaged copies of the gene also appear to cause a serious blood disorder in children, they report in Nature Genetics. It is PALB2’s job to repair mutant DNA, so people who have a faulty copy of the gene are more likely to accumulate other genetic damage too, leading to problems like cancer.

Professor Nazneen Rahman and her team studied the DNA of 923 women with breast cancer and a family history of the disease, not caused by the known breast cancer genes BRCA1 or BRCA2. Ten of the breast cancer patients had a damaged copy of PALB2, as against none of 1,084 healthy women used as a comparison. Carrying a faulty version of PALB2 more than doubled a woman’s risk of developing breast cancer – taking her lifetime risk from one in nine to about one in five.
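A quick back-of-the-envelope check of the quoted figures (my own arithmetic, not the researchers’):

\[
\tfrac{1}{9} \approx 11\%, \qquad 2 \times 11\% \approx 22\%, \qquad \tfrac{1}{5} = 20\%,
\]

so doubling a baseline lifetime risk of one in nine gives roughly 22%, which is broadly consistent with the “about one in five” quoted above: an absolute increase of somewhere around nine to eleven percentage points.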

More here.

A Case of the Mondays: the Year of Dashed Hopes

I presume that at the end of each year, pundits, writers, and bloggers gather to discuss the year’s political trends. Most of what they discuss is invariably pulled out of thin air, but I hope I’m basing my own analyses on enough evidence to escape that general description. It’s accurate to characterize 2004 as the year of liberal democratic hopes: the Orange Revolution in Ukraine, the new parliamentary elections in Georgia consolidating 2003’s Rose Revolution, the calls for democratic revolution in Iran. This continued into early 2005 with Lebanon and the scheduled elections in Palestine.

And then it all crashed. New Ukraine was plagued by corruption. The Tulip Revolution didn’t go anywhere. Frustration with the slow pace of reform in Iran catapulted Ahmadinejad to power instead of ushering in a new democratic system. Fatah looked weak on corruption, weak on Israel, and weak on public order, while Hamas looked like a fresh change.

In the Middle East, 2006 was the year of dashed hopes, even more so than 2005. Iraq was irrevocably wrecked long before 2006 started, but 2006 was the year the violence escalated. Most wars kill many more people than any subsequent occupations; in Iraq, there were more people killed in 2006 than in 2003. The Sunni-Shi’a rift had been there for fifteen years, but intensified over the course of last year, and spilled over to other countries in the region: Iran, Saudi Arabia, Lebanon. Throughout most of the year, there was only escalating violence and increasing legitimization of Muqtada Al-Sadr, but right at the end, the execution of Saddam was probably carried out by Al-Sadr’s followers, rather than by the government.

The single country in the region whose hopes were dashed the most was of course Lebanon. The Cedar Revolution was supposed to usher in a new age of democracy built along the same pillarized model that had worked in the Netherlands for about a century. Hezbollah was supposed to reform itself from a terrorist organization to a legitimate if fundamentalist political party. And the country was supposed to become independent of Syrian and Iranian influence. To a large extent due to Israel’s lack of knowledge of foreign policy responses that don’t involve military force, those hopes disintegrated in the summer of 2006.

In Palestine, Hamas won the parliamentary election, which Israel considered equivalent to a writ permitting the IDF to kidnap elected Palestinian officials at will. As had happened in Nicaragua in the early 1980s, the Hamas government found itself stripped of development aid, and became increasingly radicalized as a result. Israel responded the only way it is familiar with, i.e. with military force, and killed 655 Palestinian civilians in the Occupied Territories, up from 190 the previous year.

And in the US and Iran, two conservative Presidents with a vested interest in muzzling liberal democratic opposition escalated their saber-rattling game. In Iran, that meant crackdowns on opposition media, especially in the wake of Israel and Hezbollah’s war. Although reformists gained power in the elections toward the end of the year, real power in Iran lies in the hands of the unelected Supreme Leader Khamenei, who is as opposed to democratic reforms as Ahmadinejad.

At the same time, 2006 was the year of recognition. In Iraq, the situation became so hopeless that it was impossible to pretend everything was going smoothly. Right now the only developed country where the people support the occupation of Iraq is Israel, where indiscriminately killing Arab civilians is seen as a positive thing. The Iranian people did the best they could to weaken the regime within the parameters of the law. Hamas’s failure to deliver on its promise to make things better led to deep disillusionment among the Palestinians, which did not express itself in switching support to even more radical organizations. And most positively, the Lebanese people, including plenty of Shi’as, came to see Hezbollah not as a populist organization that would liberate them from the bombs of Israel, but as a cynical militia that played with their lives for no good reason.

Elsewhere, there were no clear regional trends. However, the political events of 2006 in the United States might point to a national trend of increased liberalism. On many issues the trend is simply a continuation or culmination of events dating at least fifteen years back, but on some, especially economic and foreign policy ones, the shift was new. In 2002 and 2004, the American people voted for more war; in 2006 they voted for less. While they didn’t elect enough Senate Democrats to withdraw from Iraq, they did express utter disapproval of the country’s actions in Iraq. This trend originated in the Haditha massacre of 2005, and Bush’s approval rating crashed in 2005 rather than in 2006, but it was in 2006 that the general discontent with the direction of American politics was expressed in a decisive vote for a politically weak party over Bush’s party.

So after the hope of 2004 and early 2005, 2006 was not just the year when violence rebounded and democracy retreated in the Middle East, but also the year when public unrest with the status quo grew. This unrest did not manifest itself in any movement with real political power, and I don’t want to be so naively optimistic as to predict that it will. I mentioned that the Iranians did everything within the parameters of the law to support democratic reforms; but Iran’s system is so hopelessly rigged that nothing within the parameters of the law can change anything. Still, indirect action typically sets the stage for direct action; Martin Luther King’s civil rights movement stood on the shoulders of decades of NAACP and ACLU litigation.

The cliché way to end this would be to look at the situation in Iran and, to a lesser extent, Lebanon and Palestine, and posit that Iran is now at a crossroads. I don’t think it is; the Iranian people have had the infrastructure and social institutions to overthrow theocracy for a number of years now, and came closest to doing so in 2002, before the US invasion of Iraq. It may be that the Iranian people have grown so tired of the regime that even “We hate America and Israel more than our opponents” isn’t enough to keep Khamenei and Ahmadinejad afloat. Or it may be that Israel will decide to save the regime by launching military strikes against its nuclear weapons program. And it may be that after either of these scenarios, there will be a political reversal the next year modeled on a color/flower revolution or on a reaction against such a revolution. Hopes can be dashed, and dashed hopes can be rescued, as 2006 taught us.

A Mole at the World Bank? Truth-Telling between the Lines

The Financial Times headline read “Rapidly Swelling Middle Class is Key to World Bank’s Global Optimism.” Nonsense, I thought. The World Bank is up to its old tricks again, making a silk purse out of a sow’s ear.

It seemed just another case of how to lie with statistics. Stress the doubling of the middle class in the world’s population by 2030, and bury the fact that 84% of the world’s population won’t be middle class. In fact, the overwhelming majority of humanity will be far from it. In 2030, half a billion persons will still be living on a dollar or less a day, and 1.8 billion will still be living on less than 2 dollars a day, according to the Bank. Against the Bank’s estimate that 1.2 billion persons in poor countries will be middle class, the claim of a “rapidly swelling middle class,” admittedly the Financial Times’ characterization of the Bank’s Global Economic Prospects 2007, issued on December 13, rings a bit hollow. The poor we shall still have with us, and in abundance, it appears, if we wait for the world’s new middle class to bring about a more just society.

The news gets worse. Economic growth through 2030 will only slightly close the income gap between rich and poor countries. For every positive economic stride taken by poor countries, for every climb on the income ladder made by this new middle class in poor countries, rich countries, and most importantly their rich citizens, take two steps forward. By 2030, the rich will have increased their proportion of the world’s income from 58% in 2000 to 69%. Thus far, massive industrialization in poor countries has not really shifted the economic balance of power. “Five decades of development have done little to bring the average incomes of developing countries closer to those of OECD (rich) countries,” says the Bank, practically in an outburst of unusual clarity.

Moreover, inequality inside poor countries is likely to worsen. This has been happening in rich countries since the seventies. In poor countries, the new middle class will be pulling away from everyone else, working class and poor alike.

Still another cloud noted by the Bank itself casts a shadow over its sunny optimism. Between now and 2030, the world economic growth rate will stagnate at 3%, a point lower than the rate of 2004 to 2006, and significantly lower than the 4.5% rate that created the rich-country middle class between 1960 and 1980. Developing countries will grow at a 4% rate, a point above the world rate, but that extra point must outpace population growth while also powering a rate of catch-up with rich countries far greater than anything seen so far. The remarkable rise of China, as well as its remarkable size, disguises the probable fates of other poor countries that will not grow at China’s astronomical rates. Their improvements will be in increments too small to pull up incomes generally.
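To get a feel for how little a single percentage point of extra growth does for catch-up, here is a rough illustrative calculation using the growth rates quoted above (the starting income ratio is hypothetical, chosen only to show the scale of the effect):

\[
\left(\frac{1.04}{1.03}\right)^{24} \approx 1.26
\]

Over the roughly 24 years to 2030, a developing country growing at 4% a year while rich countries grow at 3% raises its income relative to theirs by only about a quarter. A country starting at one-twentieth of rich-country average income would end up at roughly one-sixteenth of it, and that is before allowing for faster population growth on the poor country’s side.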

Just a case of the glass half-empty, my interpretation, versus the glass-half-full interpretation presented by the World Bank? Perhaps; there is no gainsaying either reading. In the long run, as Keynes said, we will all be dead and thus won’t know anyway.

But something else has crept into the World Bank’s analysis. As an inveterate reader of their voluminous annual reports and annual outlooks for the past decade and more, I think I have read in this latest Global Economic Prospects 2007 something new. It seems that the Bank is ready to argue, albeit buried on page 83 of the new report, that rising income inequality gets in the way of eliminating poverty. In code, they put it this way: “Rising inequality is worrisome because there is an inverse relationship between inequality and poverty reduction.” In other words, as economic inequality increases in a society, fewer people escape poverty. The Bank’s model of the relationship between inequality and poverty reduction shows that in 40 poor countries with higher income inequality, poverty reduction stagnates or declines, while in 40 other poor countries poverty does decline despite rising inequality. Where poverty declines, however, the increments are small. The Bank concludes:
“In addition to a contemporaneous reduction in poverty that may be expected from lowering inequality, policies that promote a more equal distribution of income are likely to enable the economy to realize greater poverty reduction from future growth.” (85)

What happened to the trickle-down theory? Isn’t economic growth enough? Is something more than a safety net now deemed necessary? The new middle class won’t bring greater well-being to poor countries? As Claude Rains said to Bogie: “I am shocked! Shocked!”

More shocking still: Can the World Bank be suggesting income redistribution? Are they admitting that equality is better for economies, and for people as well, than simply economic growth and an outpouring of free enterprise?

Imagine: There is a mole in the World Bank – and on Wolfowitz’s, how do you say, “watch.” Someone has smuggled a bit of truth into the Bank. Equality is better: it does a better job of eliminating poverty and improves the prospects for greater economic growth. According, this time, to the World Bank.

Faced with these facts, you might now think that the relationship between poverty and economic inequality is almost intuitive. But then again, if you are an American reader, a European reader, or that new middle class person in a poor country, you probably don’t wonder how your fate (no less than mine) is tied inextricably with the fate of all the poor people who surround you in your everyday life. Isn’t it unlikely that all of them are down on their luck, and you perchance are not?

In my next column two weeks from now, I will talk about the redistribution that the World Bank hints could occur, to the advantage of poor people around the world. I will also talk about why people in rich countries find it difficult to accept the virtues of doing something to bring about worldwide economic equality.

Selected Minor Works: Where Movies Came From

Justin E. H. Smith

In The World Viewed, Stanley Cavell wryly comments that it was not until he reached adulthood that he learned “where movies come from.” As it happens, movies come from the same place I do: California. Now as an answer to the question of origins, this is hardly satisfying. “California,” as a one-word answer to anything, has the air of a joke about it, whereas we at least aim for earnestness. This is a problem that has vexed many who have left California and attempted to make sense of it at a distance. The turn-of-the-century Harvard philosopher Josiah Royce once declared of his home state that “there is no philosophy in California.” Yet the state’s generative power, and my attachment to it, have left me with the sense that something of philosophical interest is waiting to be said, by me if I’m lucky, if not in it, then at least about it and its exports.

My sense is that these two questions, the autobiographical and the film-historical, may be treated together. This is not because I was born into a Hollywood dynasty –far from it– but because throughout most of my life, memories were something shared, something public, something manufactured. By this I mean that, instead of memories, we had movies, and instead of conversation, we mimicked dialogue. I use the past tense here, as in the title (though there in acknowledgment also of a debt to Joan Didion), because it is already clear that movies will not be the dominant art form of the twenty-first century, and if we agree with Cavell that a movie is a sequence of automated world projections, then movies are no longer being made.


A contingent development in the history of technology left us with an art form thought by many to reveal something very significant about what we as humans are. Cavell chose to express this significance in the Heideggerian terms of film’s ‘world-disclosing power’ (did Heidegger ever even see a movie?). Already before 1920, Royce’s Harvard colleague Hugo Münsterberg had argued that the ‘photoplay’ serves as a powerful proof of Fichtean idealism: what need is there for Kant’s thing-in-itself if a ‘world’ can exist just as well projected on a screen as embodied in three dimensions?

I take it for granted that the world disclosed to us today is the same world to which human beings have had access for roughly the past hundred thousand years, that is, since we became anatomically, and thus we may presume cognitively, modern. For this reason, what interests me most about movies is the question: what is it that our experience of them replaced? We have only had them for a hundred and some odd years, not long enough for our brains to have evolved from some pre-cinematic condition into something that may be said to have an a priori grasp of what a movie is, in the same way that we now know that human brains come into the world with the concept of, for example, ‘animate being’. We are not naturally movie-viewing creatures, though it certainly feels natural, as though it were just what we’ve always done. What then is it that we’ve always done, of which movie-viewing is just the latest transformation? What is that more fundamental category of activity of which movie-viewing is a variety?

One well-known answer is that watching movies is an activity much like dreaming. This is evidenced by the numerous euphemisms we use for the motion picture industry. In his recent book, The Power of Movies: How Mind and Screen Interact, the analytic philosopher Colin McGinn explicitly maintains that the mind processes cinematic stories in a way that is similar to its processing of dreams. He even suggests that movies are ‘better’ than dreams to the extent that they are ‘dreams rendered into art’.

But what then are dreams? To begin with, dreams are a reminder that every story we come up with to account for who we are and how we got to be that way is utterly and laughably false. Everything I tell myself, every comforting phrase so useful in waking life, breaks down and becomes a lie. For eight hours a day, it is true that I have killed someone and feel infinite remorse, that my teeth have fallen out, that I am able to fly but ashamed to let anyone know, that the airplanes I am in make slow-motion, 360-degree loops, that my hair is neck-length and won’t grow any longer. None of these things is true. Yet, some mornings, for a few seconds after awakening, I grasp that they are truer than true. And then they fade, and the ordinary sense of true and false settles back in.

The images that accompany these feelings –the feeling of shame at levitating, the feeling of being in a doomed airplane—are relatively unimportant. They are afterimages, congealed out of the feelings that make the dreams what they are. As Aristotle already understood, and explained in his short treatise On Dreams, “in every case an appearance presents itself, but what appears does not in every case seem real… [D]ifferent men are subject to illusions, each according to the different emotion present in him.” Perhaps because of this feature of dreams –that they are not about the things that are seen, but rather the things that are seen are accompaniments for feelings– dreams have always been interpreted symbolically. This has been the case whether the interpreter believes that dreams foretell the future, or in contrast that they help to make sense of how the past shaped the present. Psychoanalysis has brought us around, moreover, to the idea that retrodiction is no more simple a task than oneiromancy, and that indeed the two are not so different: once you unravel the deep truth of the distant past, still echoed in dreams even if our social identities have succeeded in masking it, then by that very insight, and by it alone, you become master of your own future.

It seems to me that we don’t have an adequate way of talking about dreams. The topic is highly tabooed, and anyone who recounts his dreams to others, save for those who are most intimate, is seen as flighty and mystical. Of course, the consequence of this taboo is not that dreams are not discussed, but only that they are discussed imprecisely. For the most part, we are able to explain what happened, but not what the point-of-view of the dreamer was. This is overlooked, I suspect, because it is taken for granted that the point-of-view of the dreamer is that of a movie viewer. What people generally offer when prompted to recount a dream is a sort of plot summary: this happened, then this, then this. Naturally, the plot never makes any sense at all, and so the summary leaves one with the impression that what we are dealing with is a particularly strange film.

Certainly, there is a connection between some films –especially the ‘weird’ ones– and dreams, but only because the filmmakers have consciously, and in my view always unsuccessfully, set about capturing the feeling of a dream. From Un chien andalou to Eraserhead, weird things happen indeed, but the spectator remains a spectator, outside of the world projected onto the screen, looking into it. We are made to believe that our dreams are ‘like’ movies, but lacking plots, and then whenever an ‘experimental’ filmmaker attempts to go without plot, as if on cue audiences and critics announce that the film is like a dream. Middle-brow, post-literate fare such as Darren Aronofsky’s tedious self-indulgences have further reduced the dreamlike effect supposedly conveyed by non-linear cinema to an echo of that adolescent ‘whoah’ some of us remember feeling at the Pink Floyd laser-light show down at the planetarium.

Dreams are not weird movies, even if we recognize the conventions of dreamlikeness in weird movies. Weird movies, for one thing, are watched. The dreamer, in contrast, could not be more in the world dreamt. It is the dreamer’s world. It is not a show.

However problematic the term, cinematic ‘realism’ shows us, moreover, that movies can exhibit different degrees of dreamlikeness, and thus surely that there is something wrong with the generalized movie-dream analogy. In dream sequences, we see bright colors and mist, and, as was explicitly noted by a dwarf in Living in Oblivion, we often see dwarves. When the dream sequence is over, the freaks disappear, the lighting returns to normal, and in some early color films, most notably The Wizard of Oz, we return to black-and-white, the cinematic signifier of ‘reality’. My dreams are neither like the dream sequences in movies, nor are they like the movies that contain the dream sequences. Neither Kansas nor Oz, nor limited to dwarves in the repertoire of curious sights they offer up.

A much more promising approach is to hold, with Cavell, that movies are mythological, that their characters are types rather than individuals, and that the way we experience them is probably much more like the way folk experience their tales. Movies are more like bedtime stories than dreams: more like what we cognize right before going to sleep than the mash that is made of our waking cognitions after we fall asleep.

If anything on the screen resembles dreams, it is cartoons (and thus Cavell is right to insist that these are in need of a very different sort of analysis than automated world projections). Cartoons are for the most part animistic. It is difficult to imagine a dream sequence in a Warner Brothers cartoon, since there were to begin with no regular laws of nature that might be reversed, there was no reality that might be suspended. For most of the early history of cartoons, there were no humans, but only ‘animate’ beings, such as cats and mice, as well as trees, the sun, and clouds, often given a perfunctory face just to clue us into their ontological status.

The increasing cartoonishness of movies –both the increasing reliance on computer graphics, as well as the decreasing interest in anything resembling human beings depicted in anything resembling human situations (see, e.g., Pierce Brosnan-era James Bond for a particularly extreme example of the collapse of the film/cartoon boundary)—may be cause for concern. Mythology, and its engagement with recognizably human concerns about life and death, is, it would seem, quickly being replaced by sequences of pleasing colors and amusing sounds.


I do not mean to come across as a fogey. Unlike Adorno with his jazz problem (which is inseparable from his California problem: the state that made him regret that the Enlightenment ever took place), I am a big fan of some of the animistic infantilism I have seen on digital screens recently. Shrek and the Teletubbies are fine entertainments. I am simply noting, already for a second time, that the era of movies is waning, and that nothing has stepped in, for the moment, to do what they once did.

A video-game designer recently told me that ‘gaming’ is just waiting for its own Cahiers du Cinéma, and that when such criticism comes along, and games are treated with adequate theoretical sophistication not by fans but by thinkers, games will be in a position to move into the void left by film. I have no principled reasons to be saddened by this, but they will have to do a good deal more than I’ve seen them doing so far. Now I have not played a video game since the days when Atari jackets were sincerely, and not ironically, sought after. But I did see some Nintendo Wii consoles on display in a mall in California when I was home for the holidays this past week. The best argument for what the crowding mall urchins were doing with those machines is the same one, and the only one, that we have been able to come up with since Pong, and the one I certainly deployed when pleading with my own parents for another few minutes in front of the screen: it seems to do something for developing motor skills. This makes video games the descendants of sporting and hunting, while what movies moved in to replace were the narrative folk arts, such as the preliterate recitations that would later be recorded as Homer’s Odyssey. These are two very different pedigrees indeed, and it seems unlikely to me that the one might ever be the successor to the other.

Dreams are the processing of emotional experiences had in life, experiences of such things as hunting, or fighting, or love. Narrative arts, such as movies, are the communal processing, during waking life, of these same experiences. Movies are not like dreams, and video games are not like movies. And as for what experiences are, and why all the authentic ones seem to have already been had by the time we arrive at an age that enables us to reflect on them (seem all to have happened in California), I will leave that question to a better philosopher, and a less nostalgic one.

**

For an extensive archive of Justin Smith’s writing, please visit his archive at www.jehsmith.com.