Marx at 193

Also in the LRB, John Lanchester:

In trying to think what Marx would have made of the world today, we have to begin by stressing that he was not an empiricist. He didn’t think that you could gain access to the truth by gleaning bits of data from experience, ‘data points’ as scientists call them, and then assembling a picture of reality from the fragments you’ve accumulated. Since this is what most of us think we’re doing most of the time it marks a fundamental break between Marx and what we call common sense, a notion that was greatly disliked by Marx, who saw it as the way a particular political and class order turns its construction of reality into an apparently neutral set of ideas which are then taken as givens of the natural order. Empiricism, because it takes its evidence from the existing order of things, is inherently prone to accepting as realities things that are merely evidence of underlying biases and ideological pressures. Empiricism, for Marx, will always confirm the status quo. He would have particularly disliked the modern tendency to argue from ‘facts’, as if those facts were neutral chunks of reality, free of the watermarks of history and interpretation and ideological bias and of the circumstances of their own production.

I, on the other hand, am an empiricist. That’s not so much because I think Marx was wrong about the distorting effect of underlying ideological pressures; it’s because I don’t think it’s possible to have a vantage point free of those pressures, so you have a duty to do the best with what you can see, and especially not to shirk from looking at data which are uncomfortable and/or contradictory. But this is a profound difference between Marx and my way of talking about Marx, which he would have regarded as being philosophically and politically entirely invalid.

In the Zeitgeist, Fact-Checking

Via Zite, there are two pieces on fact-checking this week. Atossa Araxia Abrahamian in The New Inquiry:

Brides magazine has a fact-checker. She does things like verify the cost of honeymoons, make sure that Vera Wang did, in fact, design that dress, and compare the captions on winter flower bouquet slideshows with pictures in botany reference books. It would be terrible to mistake a eucalyptus pod for a mere pussy willow.

Many American magazines, from trashy celebrity weeklies to highbrow general-interest journals, have fact-checkers of some sort. I worked as one in 2008, when, with three other Harper’s interns, I fact-checked the magazine’s Index from beginning to end. Being the primary speaker of foreign languages in the intern cubicle, I ended up doing a lot of the international checking for the magazine. Percentage of Russians who say one goal of U.S. foreign policy is “the complete destruction of Russia”: 43. Number of Iraqi stray dogs that Operation Baghdad Pups has helped emigrate to the United States since 2003: 66.

I quickly learned that fact-checking is a predominantly American phenomenon. The French don’t do much of it, most Russian papers certainly don’t either, and even the Swiss — possibly the most exacting and precise people on the planet — do not make use of fact-checkers in quite the same way as Americans do. Yet their presses keep rolling, and their readers keep reading, and their brides still buy roses, if by another name. People even trust the press in Switzerland much more than they do in the U.S.: 46 percent of Swiss people said they had confidence in their newspapers and magazines in 2010. Among Americans, it was only 25 percent.

Christian Lorentzen also has a piece in the LRB.

Women: The Libyan Rebellion’s Secret Weapon

From Smithsonian:

Inas Fathy’s transformation into a secret agent for the rebels began weeks before the first shots were fired in the Libyan uprising that erupted in February 2011. Inspired by the revolution in neighboring Tunisia, she clandestinely distributed anti-Qaddafi leaflets in Souq al-Juma, a working-class neighborhood of Tripoli. Then her resistance to the regime escalated. “I wanted to see that dog, Qaddafi, go down in defeat.” A 26-year-old freelance computer engineer, Fathy took heart from the missiles that fell almost daily on Col. Muammar el-Qaddafi’s strongholds in Tripoli beginning March 19. Army barracks, TV stations, communications towers and Qaddafi’s residential compound were pulverized by NATO bombs. Her house soon became a collection point for the Libyan version of meals-ready-to-eat, cooked by neighborhood women for fighters in both the western mountains and the city of Misrata. Kitchens across the neighborhood were requisitioned to prepare a nutritious provision, made from barley flour and vegetables, that could withstand high temperatures without spoiling. “You just add water and oil and eat it,” Fathy told me. “We made about 6,000 pounds of it.”

Fathy’s house, located atop a hill, was surrounded by public buildings that Qaddafi’s forces often used. She took photographs from her roof and persuaded a friend who worked for an information-technology company to provide detailed maps of the area; on those maps, Fathy indicated buildings where she had observed concentrations of military vehicles, weapons depots and troops. She dispatched the maps by courier to rebels based in Tunisia. On a sultry July evening, the first night of Ramadan, Qaddafi’s security forces came for her. They had been watching her, it turned out, for months. “This is the one who was on the roof,” one of them said, before dragging her into a car. The abductors shoved her into a dingy basement at the home of a military intelligence officer, where they scrolled through the numbers and messages on her cellphone. Her tormentors slapped and punched her, and threatened to rape her. “How many rats are working with you?” demanded the boss, who, like Fathy, was a member of the Warfalla tribe, Libya’s largest. He seemed to regard the fact that she was working against Qaddafi as a personal affront.

More here.

How to Write Like a Scientist

From Science:

I didn’t know whether to take my Ph.D. adviser’s remark as a compliment. “You don’t write like a scientist,” he said, handing me back the progress report for a grant that I had written for him. In my dream world, tears would have come to his eyes, and he would have squealed, “You write like a poet!” In reality, though, he just frowned. He had meant it as a criticism. I don’t write like a scientist, and apparently that’s bad. I asked for an example, and he pointed to a sentence on the first page. “See that word?” he said. “Right there. That is not science.”

The word was “lone,” as in “PvPlm is the lone plasmepsin in the food vacuole of Plasmodium vivax.” It was a filthy word. A non-scientific word. A flowery word, a lyrical word, a word worthy of — ugh — an MFA student. I hadn’t meant the word to be poetic. I had just used the word “only” five or six times, and I didn’t want to use it again. But in his mind, “lone” must have conjured images of PvPlm perched on a cliff’s edge, staring into the empty chasm, weeping gently for its aspartic protease companions. Oh, the good times they shared. Afternoons spent cleaving scissile bonds. Lazy mornings decomposing foreign proteins into their constituent amino acids at a nice, acidic pH. Alas, lone plasmepsin, those days are gone. So I changed the word to “only.” And it hurt. Not because “lone” was some beautiful turn of phrase but because of the lesson I had learned: Any word beyond the expected set — even a word as tame and innocuous as “lone” — apparently doesn’t belong in science. I’m still fairly new at this science thing. I’m less than 4 years beyond the dark days of grad school and the adviser who wouldn’t tolerate “lone.” So forgive my naïveté when I ask: Why the hell not? Why can’t we write like other people write? Why can’t we tell our science in interesting, dynamic stories? Why must we write dryly? (Or, to rephrase that last sentence in the passive voice, as seems to be the scientific fashion, why must dryness be written by us?)

More here.

collecting America’s other language

The scene is a mysterious one, beguiling, thrilling, and, if you didn’t know better, perhaps even a bit menacing. According to the time-enhanced version of the story, it opens on an afternoon in the late fall of 1965, when without warning, a number of identical dark-green vans suddenly appear and sweep out from a parking lot in downtown Madison, Wisconsin. One by one they drive swiftly out onto the city streets. At first they huddle together as a convoy. It takes them only a scant few minutes to reach the outskirts—Madison in the sixties was not very big, a bureaucratic and academic omnium-gatherum of a Midwestern city about half the size of today. There is then a brief halt, some cursory consultation of maps, and the cars begin to part ways. All of this first group of cars head off to the south. As they part, the riders wave their farewells, whereupon each member of this curious small squadron officially commences his long outbound adventure—toward a clutch of carefully selected small towns, some of them hundreds and even thousands of miles away. These first few cars are bound to cities situated in the more obscure corners of Florida, Oklahoma, and Alabama. Other cars that would follow later then went off to yet more cities and towns scattered evenly across every corner of every mainland state in America. The scene as the cars leave Madison is dreamy and tinted with romance, especially seen at the remove of nearly fifty years. Certainly nothing about it would seem to have anything remotely to do with the thankless drudgery of lexicography. But it had everything to do with the business, not of illicit love, interstate crime, or the secret movement of monies, but of dictionary making.

more from Simon Winchester at Lapham’s Quarterly here.

How should I live my life?

So what does it mean for the country that our cultural common denominator is shrinking? That increasingly Americans have very few experiences through which to understand the lives of our fellow citizens? And why, in the midst of these trends, is there general agreement on an issue as potentially flammable as contraception? Recently I found good answers to these questions in an unexpected place — in an essay on literature and ethics that provides a convincing account of the rock-bottom consequences of a fractured population. The essay is called “Perceptive Equilibrium: Literary Theory and Ethical Theory,” and it was first given as a talk by the philosopher Martha Nussbaum 25 years ago. The purpose of the paper was to merge literary theory with ethical theory — to show how forms of art like the novel can help us answer arguably the two most fundamental philosophical questions: How should I live my life? How should we live together? Here is Nussbaum describing the centrality of literature to ethics: “One of the things that makes literature something deeper and more central for us than a complex game, deeper even than those games, for example chess and tennis, that move us to wonder by their complex beauty, is that it speaks like Strether. It speaks about us, about our lives and choices and emotions, about our social existence and the totality of our connections.”

more from Kevin Hartnett at The Millions here.

proust’s mom

There are texts that seem to require a certain craziness of us, a mismeasure of response to match the extravagance of their expression. But can a mismeasure be a match? All we know is that we don’t want to lose or reduce the extravagance but can’t quite fall for it either. An example would be Walter Benjamin’s wonderful remark about missed experiences in Proust: ‘None of us has time to live the true dramas of the life that we are destined for. This is what ages us – this and nothing else. The wrinkles and creases on our faces are the registration of the great passions, vices, insights that called on us; but we, the masters, were not at home.’ Even without the ‘nothing else’ this is a pretty hyperbolic proposition. With the ‘nothing else’ it turns into a form of madness, a suggestion that we shall not grow old at all unless we keep failing to receive the passions, vices and insights that come to see us. This would be a life governed by new necessities, entirely free from the old ones, exempt from time and biology. The sentences are clear enough but don’t read easily as fantasy or figure of speech. Benjamin is asking us to entertain this magical thought for as long as we can, and not to replace it too swiftly by something more sensible.

more from Michael Wood at the LRB here.

Tuesday, March 27, 2012

In the Land of Blood and Honey

Srecko Horvat on Angelina Jolie's new film In the Land of Blood and Honey, about an affair between a Serb and a Muslim during the Bosnian war, in Eurozine (Warning: the article contains spoilers):

The movie tells the story of Danijel, a soldier fighting for the Bosnian Serbs, and Ajla, a Bosnian Muslim who was involved with him before the war and is now a captive in the concentration camp he oversees. It's a bad repetition of the same good old story depicted most recently in The Reader (Stephen Daldry, 2008), and unforgettably in The Night Porter (Liliana Cavani, 1974). In short, it's a story about the perpetrator and the victim and a reversal of these perspectives as the story goes on. On the one hand you have a war criminal (a concentration camp guard in The Reader, the former SS officer in The Night Porter, the Serbian officer in Jolie's movie), on the other hand you have the victim (the boy who read to the concentration camp guard, the concentration camp survivor, the innocent Muslim woman in the Bosnian war). What all three films have in common is a fatal love affair between a criminal and an innocent victim, the only difference being that, in The Reader, the boy finds out eight years later when, as a law student, he observes a trial of several women (including his former lover) accused of letting 300 Jewish women die in a burning church.

Common to all these films is also that the roles become less and less clear as the story develops. The best example is The Night Porter, where, thirteen years after the concentration camp, Lucia meets Maximilian again, who is now working at a Vienna hotel; instead of exposing him, she falls back into their sadomasochistic relationship. The relationship is what Primo Levi – remembering the case of the Sonderkommando, the “special units” of camp inmates in charge of bringing their neighbours to the gas chambers – calls the “gray zone”, the zone in which the “long chain of conjunction between victim and executioner” comes loose. Or, as Giorgio Agamben puts it in his Remnants of Auschwitz, “where the oppressed becomes oppressor and the executioner in turn appears as victim. A gray, incessant alchemy in which good and evil and, along with them, all the metals of traditional ethics reach their point of fusion”.[2]

The best expression of this new terra ethica was articulated by Michael in Bernhard Schlink's novel The Reader, on which the film was based: “I wanted simultaneously to understand Hanna's crime and to condemn it. But it was too terrible for that. When I tried to understand it, I had the feeling I was failing to condemn it as it must be condemned. When I condemned it as it must be condemned, there was no room for understanding. But even as I wanted to understand Hanna, failing to understand her meant betraying her all over again. I could not resolve this. I wanted to pose myself both tasks – understanding and condemnation. But it was impossible to do both.”[3] In other words, when we try to understand the crime, then we stop condemning it; and when we condemn, then we stop understanding it.

So, what is missing in Jolie's movie?

A Short Course in Thinking About Thinking

Daniel Kahneman in Edge:

SESSION ONE

I'll start with a topic that is called an inside-outside view of the planning fallacy. And it starts with a personal story, which is a true story.

Well over 30 years ago I was in Israel, already working on judgment and decision making, and the idea came up to write a curriculum to teach judgment and decision making in high schools without mathematics. I put together a group of people that included some experienced teachers and some assistants, as well as the Dean of the School of Education at the time, who was a curriculum expert. We worked on writing the textbook as a group for about a year, and it was going pretty well—we had written a couple of chapters, we had given a couple of sample lessons. There was a great sense that we were making progress. We used to meet every Friday afternoon, and one day we had been talking about how to elicit information from groups and how to think about the future, and so I said, Let's see how we think about the future.

I asked everybody to write down on a slip of paper his or her estimate of the date on which we would hand the draft of the book over to the Ministry of Education. That by itself, by the way, was something that we had learned: you don't want to start by discussing something, you want to start by eliciting as many different opinions as possible, which you then pool. So everybody did that, and we were really quite narrowly centered around two years; the range of estimates that people had—including myself and the Dean of the School of Education—was between 18 months and two and a half years.
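As a purely illustrative sketch (not from Kahneman's talk), the step he describes here, collecting independent estimates before any discussion and only then pooling them, can be written out in a few lines of Python. The estimate values below are assumed, chosen only to fall within the 18-month to two-and-a-half-year spread he reports; the median and range are one simple way to pool independent opinions.

    import statistics

    # Hypothetical, assumed estimates (in months) written down independently by
    # each group member before any discussion; values are illustrative only,
    # chosen to sit inside the 18-to-30-month spread Kahneman reports.
    estimates_months = [18, 20, 22, 24, 24, 27, 30]

    # Pool the independent opinions instead of debating a single number first.
    median_estimate = statistics.median(estimates_months)
    low, high = min(estimates_months), max(estimates_months)

    print(f"Pooled median estimate: {median_estimate} months")
    print(f"Spread of independent estimates: {low} to {high} months")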

But then something else occurred to me, and I asked the Dean of Education of the school whether he could think of other groups similar to our group that had been involved in developing a curriculum where no curriculum had existed before. At that period—I think it was the early 70s—there was a lot of activity in the biology curriculum, and in mathematics, and so he said, yes, he could think of quite a few. I asked him whether he knew specifically about these groups and he said there were quite a few of them about which he knew a lot. So I asked him to imagine them, thinking back to when they were at about the same state of progress we had reached, after which I asked the obvious question—how long did it take them to finish?

An Interview with Margarethe von Trotta on Her Upcoming Film About Hannah Arendt

Over at the Goethe Institute:

Thinking and writing, those are the things that really defined the great philosopher Hannah Arendt. The objective of the film was to transform this thought into a film, to make it a visual embodiment of a real person.

How does one use film to describe a woman who thinks? How can we watch her while she thinks? That is of course the big challenge when making a film about intellectual personalities. I insisted that Barbara Sukowa play Hannah because she is the only actress I know who I could imagine showing me how someone thinks, or that someone is thinking. And she managed to do it. For me, it was clear from the beginning that she was the one, and I had to push for her to get the role because some of the investors couldn’t visualize it. I said to them, “I am not doing this film without her.” I had the same situation with Rosa Luxemburg and again with Hildegard von Bingen – she really experienced the intellectual nature of Rosa’s political speeches, for example. That is how it is with Hannah Arendt. The viewer has to see that she is really thinking. She does two speeches in this film as well. Arendt was a professor at various universities in the United States and she did seminars and speeches on philosophical and political subject matter. In situations like that, it’s not about just reading your lines. You have to be able to improvise and develop the speech as you go. In the film there is a six-minute speech in English, with the strong German accent that Arendt had, and Sukowa is able to get viewers to experience, think and follow her analyses.

What were the preparations for the film like? And what about your contact with Arendt’s world?

Before we started writing the screenplay we met with a lot of people in New York who had known Arendt well on a personal level. People like Lotte Köhler, her longtime colleague and friend who died in 2011 at the age of 92, or Elisabeth Young-Bruehl, who also died in 2011, as well as others like Lore Jonas, widow of Hans Jonas, and Jerome Kohn, her last assistant and publisher of her posthumous writings. Those were amazing encounters, the stuff you need when you are writing a script about this type of real person who you’ve never met yourself.

Hilton Kramer, 1928-2012

William Grimes in the NYT:

Admired for his intellectual range and feared for his imperious judgments, Mr. Kramer emerged as a critic in the early 1950s and joined The Times in 1965, a time when the tenets of high modernism were being questioned and increasingly attacked. He was a passionate defender of high art against the claims of popular culture and saw himself not simply as a critic offering informed opinion on this or that artist, but also as a warrior upholding the values that made civilized life worthwhile.

This stance became more marked as political art and its advocates came to the fore, igniting the culture wars of the early 1980s, a struggle in which Mr. Kramer took a leading role as the editor of The New Criterion, where he was also a frequent contributor.

In its pages, Mr. Kramer took dead aim at a long list of targets: creeping populism at leading art museums; the incursion of politics into artistic production and curatorial decision making; the fecklessness, as he saw it, of the National Endowment for the Arts; and the decline of intellectual standards in the culture at large.

A resolute high modernist, he was out of sympathy with many of the aesthetic waves that came after the great achievements of the New York School, notably Pop (“a very great disaster”), conceptual art (“scrapbook art”) and postmodernism (“modernism with a sneer, a giggle, modernism without any animating faith in the nobility and pertinence of its cultural mandate”).

At the same time, he made it his mission to bring underappreciated artists to public attention and open up the history of 20th-century American art to include figures like Milton Avery and Arthur Dove, about whom he wrote with insight and affection.

the fate of the western

However much certain optimists may talk about the survival or possible resurrection of the Western, I fear—much to my regret—that, as a genre, it is pretty well dead and buried, a relic of a more credulous, more innocent, more emotional age, an age less crushed or suffocated by the ghastly plague of political correctness. Nonetheless, whenever a new Western comes out, I dutifully go and see it, albeit with little expectation that it will be any good. In the last decade, I can recall three pointless remakes, vastly inferior to the movies on which they were modelled and which weren’t exactly masterpieces themselves: 3:10 to Yuma by James Mangold, The Alamo by John Lee Hancock, and True Grit by the Coen brothers, all of them uninspired and unconvincing, and far less inspired than the distinctly uneven originals made, respectively, by Delmer Daves, John Wayne, and Henry Hathaway. I recall, too, Andrew Dominik’s interesting but dull The Assassination of Jesse James by the Coward Robert Ford, Ed Harris’s bland, soulless Appaloosa, David von Ancken’s unbearable Seraphim Falls, and the Australian John Hillcoat’s The Proposition, of which my memory has retained not a single image. The only recent Westerns that have managed to arouse my enthusiasm have been those made for TV: Walter Hill’s Broken Trail, and Deadwood, whose third and final season no one has even bothered to bring out on DVD in Spain, which gives you some idea of how unsuccessful the magnificent first two series must have been. In my view, Kevin Costner’s Open Range, which came out slightly earlier, was the last decent Western to be made for the big screen, even though it has long been fashionable to denigrate anything this admirable actor and director does.

more from Javier Marías at Threepenny Review here.

whoever we may be, we are aliens too

Vincent Gallo is one of the most disliked of current film actors, while George Clooney is one of the most admired, but most viewers of Essential Killing—American, Belgian, Sri Lankan, or Japanese—probably have more in common with Gallo’s “Mohammed” than they have with Clooney. Anyone can be targeted, victimized, have their eardrums blasted out, be forced to hide and kill in order to survive. All these are possibilities of human existence that, at the advanced stage of civilization we enjoy, are available to everyone. But to be George Clooney? He may make it look easy. It’s in the voice, however, that the deceptive quality of the Clooney figure can best be detected. Clooney, who is from Lexington, Kentucky, speaks with an unmarked accent, an accent of zero. His vocal deadpan (so soothing in Wes Anderson’s Fantastic Mr. Fox [2009]) projects a reasonableness and an authority that do not impose themselves through any apparent violence. When he talks, it’s as if he were saying nothing. Such a talent makes him indeed The American.

more from Chris Fujiwara at n+1 here.

first act is final curtain

It’s impossible to know how Francesca Woodman’s photographs would strike us if she hadn’t thrown herself out of a window at 22. Her suicide makes every image feel portentous. Each is a memento mori, a harbinger of imminent death. She specialised in self-portraits and the suite of choreographed scenes she shot with a timer or a remote trigger seems in retrospect a record of her unravelling. We rarely see her face. She bleeds into the background in very long exposures and disappears into crumbling walls. Her limbs vanish behind wallpaper and blur into architecture. Her flesh is barely solid, melting into mist and yielding to the rigid surface of a windowpane. The new exhibition of her work at New York’s Guggenheim Museum prompts a series of unanswerable questions. Would Woodman’s fierce self-scrutiny have ebbed with maturity or would it have inflected her entire career? Did the monomaniacal intensity of her work propel her towards death?

more from Ariella Budick at the FT here.

More than Health Insurance

From The New Yorker:

On Monday, the case of the century got even bigger. In challenges to the Affordable Care Act in lower courts, several judges gave the Supreme Court an escape hatch. These judges, including Brett Kavanaugh, a young judge sure to make Republican short lists for the Supreme Court, said that the Justices should kick the can down the road and put off a decision for a year or two. Specifically, Kavanaugh said that the Tax Anti-Injunction Act (a deeply obscure law) compelled the Justices to put off a decision on the law until it takes full effect, in 2014.

Across the ideological spectrum, the Justices, through their questions to the lawyers arguing for and against upholding the A.C.A., declined the invitation for delay. They all (that is, the eight who asked questions; Clarence Thomas did not) seemed to recognize that there were legal and prudential reasons to resolve this issue now. As Justice Ruth Bader Ginsburg said, the act “does not apply to penalties that are designed to induce compliance with the law, rather than to raise revenue. And this is not a revenue-raising measure because, if it’s successful, they—nobody will pay the penalty, and there will be no revenue to raise.” The Court, it now seems clear, will decide this case on the merits.

More here.

At Bottom of Pacific, Director Sees Dark Frontier

From The New York Times:

No sea monsters. No strange life. No fish. Just amphipods — tiny shrimplike creatures swimming across a featureless plain of ooze that stretched off into the primal darkness. “It was very lunar, a very desolate place,” James Cameron, the movie director, said in a news conference on Monday after completing the first human dive in 52 years to the ocean’s deepest spot, nearly seven miles down in the western Pacific. “We’d all like to think there are giant squid and sea monsters down there,” he said, adding that such creatures still might be found. But on this dive he saw “nothing larger than about an inch across” — just the shrimplike creatures, which are ubiquitous scavengers of the deep.

His dive, which had been delayed by rough seas for about two weeks, did not go entirely as planned: his submersible’s robot arm failed to operate properly, and his time at the bottom was curtailed from a planned six hours to about three. It was not entirely clear why. But he did emerge safely from the perilous trip, vowing to press on. The area he wants to explore, he said, was 50 times larger than the Grand Canyon. “I see this as the beginning,” Mr. Cameron said. “It’s not a one-time deal and then you move on. It’s the beginning of opening up this frontier.” National Geographic, which helped sponsor the expedition to the area known as the Challenger Deep, said that Mr. Cameron, the maker of the movies “Avatar” and “Titanic,” began his dive on Sunday at 3:15 p.m. Eastern Daylight Time, landed on the bottom at 5:52 p.m. and surfaced at 10 p.m. He conducted the news conference via satellite as he was being rushed to Guam in the hope of reaching London for the debut on Tuesday of “Titanic 3-D.”

More here.

Sunday, March 25, 2012

The Originality of the Species

Ian McEwan in The Guardian (via Mark Trodden):

[T]he modern artefact bears the stamp of personality. The work is the signature. The individual truly possesses his or her own work, has rights in it, defines himself by it. It is private property that cannot be trespassed on. A great body of law has grown up around this possessiveness. Countries that do not sign up to the Berne Convention and other international agreements relating to intellectual property rights find themselves excluded from the mainstream of a globalised culture. The artist owns his work, and sits glowering over it, like a broody hen on her eggs. We see the intensity of this fusion of originality and individuality whenever a plagiarism scandal erupts. (I’ve had some experience of it myself.)

The dust-jacket photograph, though barely relevant to an appreciation of a novel, seals the ownership. This is me, it says, and what you have in your hands is mine. Or is me. We see it too in the cult of personality that surrounds the artist – individuality and personality are driven to inspire near-religious devotion. The coach parties at Grasmere, the cult of Hemingway, or Picasso, or Neruda. These are big figures – their lives fascinate us sometimes even more than their art.

This fascination is relatively new. In their day, Shakespeare, Bach, Mozart, even Beethoven were not worshipped; they did not gleam in the social rankings the way their patrons did, or in the way that Byron or Chopin would do, or in the way a Nobel Prize-winner does today. How the humble artist was promoted to the role of secular priest is a large and contentious subject, a sub-chapter in the long discussion about individuality and modernity. The possible causes make a familiar list – capitalism, a growing leisured class, the Protestant faith, the Romantic movement, new technologies of communication, the elaboration of patent law following the Industrial Revolution. Some or all of these have brought us to the point at which the identification of the individual and her creativity is now complete and automatic and unquestionable. The novelist today who signs her name in her book for a reader, and the reader who stands in line waiting for his book to be signed, collude in this marriage of selfhood and art.

There is an antithetical notion of artistic creation, and though it has been expressed in different forms by artists, critics and theoreticians, it has never taken hold outside the academies. This view holds that, of course, no one escapes history. Something cannot come out of nothing, and even a genius is bound by the constraints and opportunities of circumstance. The artist is merely the instrument on which history and culture play. Whether an artist works within his tradition or against it, he remains its helpless product. The title of Auden’s essay, “The Dyer’s Hand”, is just a mild expression of the drift. Techniques and conventions developed by predecessors – perspective, say, or free indirect style (the third-person narrative coloured by a character’s subjective state) – are available as ready-made tools and have a profound effect. Above all, art is a conversation conducted down through the generations. Meaningful echoes, parody, quotation, rebellion, tribute and pastiche all have their place. Culture, not the individual talent, is the predominant force; in creative writing classes, young writers are told that if they do not read widely, they are more likely to be helplessly influenced by those whose work they do not know.

Such a view of cultural inheritance is naturally friendly to science.