An Interview with Margarethe von Trotta on Her Upcoming Film About Hannah Arendt

Over at the Goethe Institute:

Thinking and writing, those are the things that really defined the great philosopher Hannah Arendt. The objective was to transform this thinking into film, to make it the visual embodiment of a real person.

How does one use film to describe a woman who thinks? How can we watch her while she thinks? That is of course the big challenge when making a film about intellectual personalities. I insisted that Barbara Sukowa play Hannah because she is the only actress I know who I could imagine showing me how someone thinks, or that someone is thinking. And she managed to do it. For me, it was clear from the beginning that she was the one, and I had to push for her to get the role because some of the investors couldn’t visualize it. I said to them, “I am not doing this film without her.” I had the same situation with Rosa Luxemburg and again with Hildegard von Bingen – she really experienced the intellectual nature of Rosa’s political speeches, for example. That is how it is with Hannah Arendt. The viewer has to see that she is really thinking. She does two speeches in this film as well. Arendt was a professor at various universities in the United States and she did seminars and speeches on philosophical and political subject matter. In situations like that, it’s not about just reading your lines. You have to be able to improvise and develop the speech as you go. In the film there is a six-minute speech in English, with the strong German accent that Arendt had, and Sukowa is able to get viewers to experience, think and follow her analyses.

What were the preparations for the film like? And what about your contact with Arendt’s world?

Before we started writing the screenplay we met with a lot of people in New York who had known Arendt well on a personal level. People like Lotte Köhler, her longtime colleague and friend who died in 2011 at the age of 92, or Elisabeth Young-Bruehl, who also died in 2011, as well as others like Lore Jonas, widow of Hans Jonas, and Jerome Kohn, her last assistant and publisher of her posthumous writings. Those were amazing encounters, the stuff you need when you are writing a script about this type of real person who you’ve never met yourself.

Hilton Kramer, 1928-2012

William Grimes in the NYT:

Admired for his intellectual range and feared for his imperious judgments, Mr. Kramer emerged as a critic in the early 1950s and joined The Times in 1965, a time when the tenets of high modernism were being questioned and increasingly attacked. He was a passionate defender of high art against the claims of popular culture and saw himself not simply as a critic offering informed opinion on this or that artist, but also as a warrior upholding the values that made civilized life worthwhile.

This stance became more marked as political art and its advocates came to the fore, igniting the culture wars of the early 1980s, a struggle in which Mr. Kramer took a leading role as the editor of The New Criterion, where he was also a frequent contributor.

In its pages, Mr. Kramer took dead aim at a long list of targets: creeping populism at leading art museums; the incursion of politics into artistic production and curatorial decision making; the fecklessness, as he saw it, of the National Endowment for the Arts; and the decline of intellectual standards in the culture at large.

A resolute high modernist, he was out of sympathy with many of the aesthetic waves that came after the great achievements of the New York School, notably Pop (“a very great disaster”), conceptual art (“scrapbook art”) and postmodernism (“modernism with a sneer, a giggle, modernism without any animating faith in the nobility and pertinence of its cultural mandate”).

At the same time, he made it his mission to bring underappreciated artists to public attention and open up the history of 20th-century American art to include figures like Milton Avery and Arthur Dove, about whom he wrote with insight and affection.

the fate of the western


However much certain optimists may talk about the survival or possible resurrection of the Western, I fear—much to my regret—that, as a genre, it is pretty well dead and buried, a relic of a more credulous, more innocent, more emotional age, an age less crushed or suffocated by the ghastly plague of political correctness. Nonetheless, whenever a new Western comes out, I dutifully go and see it, albeit with little expectation that it will be any good. In the last decade, I can recall three pointless remakes, vastly inferior to the movies on which they were modelled and which weren’t exactly masterpieces themselves: 3:10 to Yuma by James Mangold, The Alamo by John Lee Hancock, and True Grit by the Coen brothers, all of them uninspired and unconvincing, and far less inspired than the distinctly uneven originals made, respectively, by Delmer Daves, John Wayne, and Henry Hathaway. I recall, too, Andrew Dominik’s interesting but dull The Assassination of Jesse James by the Coward Robert Ford, Ed Harris’s bland, soulless Appaloosa, David von Ancken’s unbearable Seraphim Falls, and the Australian John Hillcoat’s The Proposition, of which my memory has retained not a single image. The only recent Westerns that have managed to arouse my enthusiasm have been those made for TV: Walter Hill’s Broken Trail, and Deadwood, whose third and final season no one has even bothered to bring out on DVD in Spain, which gives you some idea of how unsuccessful the magnificent first two series must have been. In my view, Kevin Costner’s Open Range, which came out slightly earlier, was the last decent Western to be made for the big screen, even though it has long been fashionable to denigrate anything this admirable actor and director does.

more from Javier Marías at Threepenny Review here.

whoever we may be, we are aliens too


Vincent Gallo is one of the most disliked of current film actors, while George Clooney is one of the most admired, but most viewers of Essential Killing—American, Belgian, Sri Lankan, or Japanese—probably have more in common with Gallo’s “Mohammed” than they have with Clooney. Anyone can be targeted, victimized, have their eardrums blasted out, be forced to hide and kill in order to survive. All these are possibilities of human existence that, at the advanced stage of civilization we enjoy, are available to everyone. But to be George Clooney? He may make it look easy. It’s in the voice, however, that the deceptive quality of the Clooney figure can best be detected. Clooney, who is from Lexington, Kentucky, speaks with an unmarked accent, an accent of zero. His vocal deadpan (so soothing in Wes Anderson’s Fantastic Mr. Fox [2009]) projects a reasonableness and an authority that do not impose themselves through any apparent violence. When he talks, it’s as if he were saying nothing. Such a talent makes him indeed The American.

more from Chris Fujiwara at n+1 here.

first act is final curtain


It’s impossible to know how Francesca Woodman’s photographs would strike us if she hadn’t thrown herself out of a window at 22. Her suicide makes every image feel portentous. Each is a memento mori, a harbinger of imminent death. She specialised in self-portraits and the suite of choreographed scenes she shot with a timer or a remote trigger seems in retrospect a record of her unravelling. We rarely see her face. She bleeds into the background in very long exposures and disappears into crumbling walls. Her limbs vanish behind wallpaper and blur into architecture. Her flesh is barely solid, melting into mist and yielding to the rigid surface of a windowpane. The new exhibition of her work at New York’s Guggenheim Museum prompts a series of unanswerable questions. Would Woodman’s fierce self-scrutiny have ebbed with maturity or would it have inflected her entire career? Did the monomaniacal intensity of her work propel her towards death?

more from Ariella Budick at the FT here.

More than Health Insurance

From The New Yorker:

On Monday, the case of the century got even bigger. In challenges to the Affordable Care Act in lower courts, several judges gave the Supreme Court an escape hatch. These judges, including Brett Kavanaugh, a young judge sure to make Republican short lists for the Supreme Court, said that the Justices should kick the can down the road and put off a decision for a year or two. Specifically, Kavanaugh said that the Tax Anti-Injunction Act (a deeply obscure law) compelled the Justices to put off a decision on the law until it takes full effect, in 2014.

Across the ideological spectrum, the Justices, through their questions to the lawyers arguing for and against upholding the A.C.A., declined the invitation for delay. They all (that is, the eight who asked questions; Clarence Thomas did not) seemed to recognize that there were legal and prudential reasons to resolve this issue now. As Justice Ruth Bader Ginsburg said, the act “does not apply to penalties that are designed to induce compliance with the law, rather than to raise revenue. And this is not a revenue-raising measure because, if it’s successful, they—nobody will pay the penalty, and there will be no revenue to raise.” The Court, it now seems clear, will decide this case on the merits.

More here.

At Bottom of Pacific, Director Sees Dark Frontier

From The New York Times:

No sea monsters. No strange life. No fish. Just amphipods — tiny shrimplike creatures swimming across a featureless plane of ooze that stretched off into the primal darkness. “It was very lunar, a very desolate place,” James Cameron, the movie director, said in a news conference on Monday after completing the first human dive in 52 years to the ocean’s deepest spot, nearly seven miles down in the western Pacific. “We’d all like to think there are giant squid and sea monsters down there,” he said, adding that such creatures still might be found. But on this dive he saw “nothing larger than about an inch across” — just the shrimplike creatures, which are ubiquitous scavengers of the deep.

His dive, which had been delayed by rough seas for about two weeks, did not go entirely as planned: his submersible’s robot arm failed to operate properly, and his time at the bottom was curtailed from a planned six hours to about three. It was not entirely clear why. But he did emerge safely from the perilous trip, vowing to press on. The area he wants to explore, he said, was 50 times larger than the Grand Canyon. “I see this as the beginning,” Mr. Cameron said. “It’s not a one-time deal and then you move on. It’s the beginning of opening up this frontier.” National Geographic, which helped sponsor the expedition to the area known as the Challenger Deep, said that Mr. Cameron, the maker of the movies “Avatar” and “Titanic,” began his dive on Sunday at 3:15 p.m. Eastern Daylight Time, landed on the bottom at 5:52 p.m. and surfaced at 10 p.m. He conducted the news conference via satellite as he was being rushed to Guam in the hope of reaching London for the debut on Tuesday of “Titanic 3-D.”

More here.

Monday, March 26, 2012

Sunday, March 25, 2012

The Originality of the Species

Ian McEwan in The Guardian (via Mark Trodden):

[T]he modern artefact bears the stamp of personality. The work is the signature. The individual truly possesses his or her own work, has rights in it, defines himself by it. It is private property that cannot be trespassed on. A great body of law has grown up around this possessiveness. Countries that do not sign up to the Berne Convention and other international agreements relating to intellectual property rights find themselves excluded from the mainstream of a globalised culture. The artist owns his work, and sits glowering over it, like a broody hen on her eggs. We see the intensity of this fusion of originality and individuality whenever a plagiarism scandal erupts. (I’ve had some experience of it myself.)

The dust-jacket photograph, though barely relevant to an appreciation of a novel, seals the ownership. This is me, it says, and what you have in your hands is mine. Or is me. We see it too in the cult of personality that surrounds the artist – individuality and personality are driven to inspire near-religious devotion. The coach parties at Grasmere, the cult of Hemingway, or Picasso, or Neruda. These are big figures – their lives fascinate us sometimes even more than their art.

This fascination is relatively new. In their day, Shakespeare, Bach, Mozart, even Beethoven were not worshipped, they did not gleam in the social rankings the way their patrons did, or in the way that Byron or Chopin would do, or in the way a Nobel Prize-winner does today. How the humble artist was promoted to the role of secular priest is a large and contentious subject, a sub-chapter in the long discussion about individuality and modernity. The possible causes make a familiar list – capitalism, a growing leisured class, the Protestant faith, the Romantic movement, new technologies of communication, the elaboration of patent law following the Industrial Revolution. Some or all of these have brought us to the point at which the identification of the individual and her creativity is now complete and automatic and unquestionable. The novelist today who signs her name in her book for a reader, and the reader who stands in line waiting for his book to be signed collude in this marriage of selfhood and art.

There is an antithetical notion of artistic creation, and though it has been expressed in different forms by artists, critics and theoreticians, it has never taken hold outside the academies. This view holds that, of course, no one escapes history. Something cannot come out of nothing, and even a genius is bound by the constraints and opportunities of circumstance. The artist is merely the instrument on which history and culture play. Whether an artist works within his tradition or against it, he remains its helpless product. The title of Auden’s essay, “The Dyer’s Hand”, is just a mild expression of the drift. Techniques and conventions developed by predecessors – perspective, say, or free indirect style (the third person narrative coloured by a character’s subjective state) are available as ready-made tools and have a profound effect. Above all, art is a conversation conducted down through the generations. Meaningful echoes, parody, quotation, rebellion, tribute and pastiche all have their place. Culture, not the individual talent, is the predominant force; in creative writing classes, young writers are told that if they do not read widely, they are more likely to be helplessly influenced by those whose work they do not know.

Such a view of cultural inheritance is naturally friendly to science.

An Antimatter Breakthrough

From Liz Mermin's documentary in progress: “On 7 March, the journal Nature published the latest results from the ALPHA experiment at CERN. The findings were called ‘historic.’ ALPHA first made science history in 2010, when they created atoms of anti-hydrogen; in 2011 they succeeded in trapping and holding these atoms for an astonishing 1000 seconds. In these three short films, members of the ALPHA collaboration explain their latest triumph, revealing the excitement behind this extraordinary scientific process.”

Read more »

Cynthia Nixon, Joseph Massad, and Not Being an American Gigolo

Scott Long in A Paper Bird:

In the politics of identity, bisexuals are hated because they stand for choice. The game is set up so as to exclude the middle; bisexuals get squeezed out. In the “LGBT” word, the “B” is silent. John Aravosis, for instance, says that if you’re into both genders, “that’s fine” — great! — but “most people” aren’t. First off, that rather defies Freud and the theory of universal infantile bisexuality. But never mind that. The business of “outing,” of which Aravosis has been an eloquent proponent, also revolves around the excluded middle. It’s not a matter of what you think of outing’s ethics, on which there’s plenty of debate. It’s that the underlying presumption is that one gay sex act makes you “gay” — not errant, not bisexual, not confused or questioning: gay, gay, gay. I saw you in that bathroom, for God’s sake! You’re named for life! It’s also that the stigma goes one way only: a lifetime of heterosexual sex acts can’t make up for that one, illicit, overpowering pleasure. As I’ve argued, this both corresponds to our own buried sense, as gays, that it is a stigma, and gives us perverse power. In the scissors, paper, rock game of sexuality, gay is a hand grenade. It beats them all.

And this fundamentalism infects other ways of thinking about sexuality, too. Salon today carries an article about multiple sex-and-love partners: “The right wants to use the ‘slippery slope’ of polyamory to discredit gay marriage. Here’s how to stop them.” I’ll leave you to study the author’s solution. He doesn’t want to disrespect the polyamorists:

I reject the tactic of distinguishing the good gays from the “bad” poly people. Further marginalizing the marginalized is just the wrong trajectory for any liberation movement to take.

That’s true — although whether we’re still really a liberation movement, when we deny the liberty of self-description, is a bit doubtful. But he goes on, contemplating how polyamory might in future be added to the roster of rights:

Really, there are a host of questions that arise in the case of polyamory to which we just don’t know the answer. Is polyamory like sexual orientation, a deep trait felt to be at the core of one’s being? Would a polyamorous person feel as incomplete without multiple partners as a lesbian or gay person might feel without one? How many “truly polyamorous” people are there?

Well, what if it’s not? What if you just choose to be polyamorous? God, how horrible! You beast! What can be done for the poor things? Should some researcher start looking for a gene for polyamory, so it can finally become respectable, not as a practice, but as an inescapable doom? (I shudder to think there’s one gene I might share with Newt Gingrich.)

What, moreover, if sexual orientation itself is not “a deep trait felt to be at the core of one’s being,” one that people miraculously started feeling in 1869, when the word “homosexual” was coined? What if it’s sometimes that, sometimes a transient desire, sometimes a segment of growth or adolescent exploration, sometimes a recourse from the isolations of middle age, sometimes a Saturday night lark, sometimes a years-long passion? What if some people really do experience it as … a choice?

What if our model for defending LGBT people’s rights were not race, but religion? What if we claimed our identities were not something impossible to change, but a decision so profoundly a part of one’s elected and constructed selfhood that one should never be forced to change it?

Hey Dude

Robert Lane Greene in More Intelligent Life (for Sophie Schulte-Hillen):

Slang rarely has staying power. That is part of its charm; the young create it, and discard it as soon as it becomes too common. Slang is a subset of in-group language, and once that gets taken up by the out-group, it’s time for the in-crowd to come up with something new. So the long life of one piece of American slang, albeit in many different guises, is striking. Or as the kids would say, “Dude!”

Though the term seems distinctly American, it had an interesting birth: one of its first written appearances came in 1883, in the American magazine, which referred to “the social ‘dude’ who affects English dress and the English drawl”. The teenage American republic was already a growing power, with the economy booming and the conquest of the West well under way. But Americans in cities often aped the dress and ways of Europe, especially Britain. Hence dude as a dismissive term: a dandy, someone so insecure in his Americanness that he felt the need to act British. It’s not clear where the word’s origins lay. Perhaps its mouth-feel was enough to make it sound dismissive.

From the specific sense of dandy, dude spread out to mean an easterner, a city slicker, especially one visiting the West. Many westerners resented the dude, but some catered to him. Entrepreneurial ranchers set up ranches for tourists to visit and stay and pretend to be cowboys themselves, giving rise to the “dude ranch”.

By the 1950s or 1960s, dude had been bleached of specific meaning. In black culture, it meant almost any male; one sociologist wrote in 1967 of a group of urban blacks he was studying that “these were the local ‘dudes’, their term meaning not the fancy city slickers but simply ‘the boys’, ‘fellas’, the ‘cool people’.”

From the black world it moved to hip whites, and so on to its enduring associations today—California, youth, cool. In “Easy Rider” (1969) Peter Fonda explains it to the square Jack Nicholson: “Dude means nice guy. Dude means a regular sort of person.” And from this new, broader, gentler meaning, dude went vocative.

The Return of Mad Men and the End of TV’s Golden Age

Andy Greenwald in Grantland:

[L]ike the Komodo dragon or Kirk Cameron, a few Golden Age shows remain in production even if their evolutionary time has passed. Larry David will keep kvetching as long as there's bile in his body, and the brilliant Breaking Bad has one more batch of crystal to cook. But with three full seasons stretching out before us like the red carpet at the Clios, Mad Men will be the last of the Golden Age shows to grace our flat-screens. With a typically outstanding new episode, the first in 17 months, due to premiere on Sunday, it's worth asking: Is it also the best?

The line of inheritance from first to last is almost too neat: David Chase hired Matt Weiner to the Sopranos off of the cigarette-stained spec of Mad Men, a script originally written by Weiner in an aspirational frenzy while toiling on the Bronze Age Ted Danson sitcom Becker. Weiner's infamous penchant for micromanaging and rewriting was learned at the foot of Chase, and Don Draper is a direct descendent of Tony Soprano; the two share a charismatic corruption, the last of the troubled titans. But this is where the comparisons end. The Sopranos, in all its digressive genius, was a show dedicated to the impossibility of change. Season by season, Chase built a red-sauce-spattered shrine to a lifetime of lessons learned on Dr. Melfi-esque couches: that people are who they are, no matter what. At its core, The Sopranos was Chase's grand F.U. to all the hard-worn stereotypes of Television 1.0, the boring brontosaur he'd finally managed to dump in the Meadowlands. There was no hugging in Tony's New Jersey. No learning or smoothing or straightening. Tony Soprano was Tony Soprano: an amiable monster. In the end, Chase argued with nihilistic aplomb, it doesn't much matter how the Satriale sausage was made, just whether it was spicy or sweet. And when he began to feel revulsion toward his audience's bloodlust, he denied them even that: The finale's fade to black ensured Tony would be stuck with himself for eternity. To Chase it was a fate worse than prison or a slug to the head from a mook in a Member's Only jacket; a karmic feedback loop in the shape of an onion ring.

Mad Men is different. It's less dark and more expansive than its ancestor because, unlike Chase, Weiner isn't asking questions that he's already convinced himself can't be answered. Where The Sopranos was angry, Mad Men is curious. Even at his grief-wracked, whiskey-bloated nadir last season, being Don Draper wasn't a life sentence because Don Draper doesn't exist. He's merely a particularly dapper suit that Dick Whitman is trying on for size. On Mad Men, identity is what's fungible, not nature.

The Birangana and the birth of Bangladesh

From Himal Southasian:

The year 1971 was a landmark in Southasian history for many reasons. It saw not only the birth of Bangladesh but also the war fought between Pakistan and India. It was perhaps the only such conflict involving the three most populous Southasian countries, clashing for the first time since the end of colonial rule. High-level politics and the tumultuous times spawned a number of books on war, international relations and human rights. However, an uncanny silence has remained about one aspect of the war – the sexual crimes committed by the Pakistan Army and its collaborators, the Razakar militia, against Bangladeshi women. It is only now, 40 years on, that some of that silence is being broken.

Bina D’Costa’s new book, Nationbuilding, Gender and War Crimes in South Asia, takes on the mammoth task of placing violence against women during the war in a larger political context. While what D’Costa calls the ‘original cartographic trauma’ of the Subcontinent has been well researched, gendered nation-building narratives have been given little consideration. Yet D’Costa proposes that any theorisation of nation-building in post-Partition India and Pakistan, or post-Liberation Bangladesh, is incomplete without a gendered analysis. Recognising that women have largely been silenced by state historiography, feminist scholars and activists in Southasia – Veena Das, Kamla Bhasin, Ritu Menon, Urvashi Butalia – have attempted to explore this sordid aspect of war. That rape has been used as a weapon of war has been well documented. One of the more famous examples is American feminist Susan Brownmiller’s investigation of rapes committed during the two World Wars, in Vietnam and then in Bangladesh, which emerged as the 1975 classic Against Our Will: Men, Women and Rape. The idea of defiling the enemy population by raping its women and impregnating them, often while their helpless and ‘feminised’ menfolk watch, is based on notions of honour, purity and emasculating the opposition. These notions of defilement also led to the sacrificial killing, sometimes by their own families, of women who had either been raped or even simply exposed to the potential of sexual violence.

More here.

The Chemistry of Tears

From The Telegraph:

It is not an exaggeration to say that Peter Carey has given new meaning to the term “historical fiction”. Nowadays novels set in the past are the norm; they seem likely to outnumber those set in the present. In the Eighties, when Carey started writing them, they constituted a separate genre. His early novels were genuinely innovative, and played a large part in that transformation. Impressively, he continues to produce another masterclass every couple of years. His modus operandi is to intertwine his unique fictions with historical documents – from Edmund Gosse’s autobiography in Oscar and Lucinda (1988), to the work of Alexis de Tocqueville in Parrot and Olivier in America 20 years later, most audaciously Great Expectations in Jack Maggs, most spectacularly Ned Kelly’s letters in True History of the Kelly Gang. His reshaping of history, particularly Australian history, arriving at assertive postcolonial versions of Australian national identity, is central to his technique.

In this, his 12th novel, imperial patronage takes a bashing and Victoria and Albert are glimpsed in their nighties, but the seed of historical truth is the 18th-century inventor Jacques de Vaucanson’s mechanical duck. This famed automaton supposedly ate, digested and excreted grain in front of an audience, but was something of a fraud, because its droppings were made in advance. In The Chemistry of Tears, Catherine Gehrig, a conservator at London’s Swinburne Museum, learns of the death of her married lover and colleague. It is 2010, and in the midst of her secret grief Catherine’s boss gives her a mysterious object to reconstruct. It is a copy of the famous duck, commissioned by one Henry Brandling. His notebooks, written in 1854, detail his intention to build Vaucanson’s duck to enliven the spirits of his dangerously ill son, by arousing his “magnetic agitation”, as if the boy himself were an automaton.

More here.

A Boy to Be Sacrificed

Abdellah Taïa in the New York Times:

In the Morocco of the 1980s, where homosexuality did not, of course, exist, I was an effeminate little boy, a boy to be sacrificed, a humiliated body who bore upon himself every hypocrisy, everything left unsaid. By the time I was 10, though no one spoke of it, I knew what happened to boys like me in our impoverished society; they were designated victims, to be used, with everyone’s blessing, as easy sexual objects by frustrated men. And I knew that no one would save me — not even my parents, who surely loved me. For them too, I was shame, filth. A “zamel.”

Like everyone else, they urged me into a terrible, definitive silence, there to die a little more each day.

How is a child who loves his parents, his many siblings, his working-class culture, his religion — Islam — how is he to survive this trauma? To be hurt and harassed because of something others saw in me — something in the way I moved my hands, my inflections. A way of walking, my carriage. An easy intimacy with women, my mother and my many sisters. To be categorized for victimhood like those “emo” boys with long hair and skinny jeans who have recently been turning up dead in the streets of Iraq, their skulls crushed in.

The truth is, I don’t know how I survived. All I have left is a taste for silence. And the dream, never to be realized, that someone would save me. Now I am 38 years old, and I can state without fanfare: no one saved me.

More here.

Saturday, March 24, 2012

Robert Reich on Saving Capitalism and Democracy

From The Browser:

In a recent post on your website, you said there was “moral rot” in America. And you say: “It’s located in the public behaviour of people who control our economy and are turning our democracy into a financial slush pump.” Can you expand on this?

An economy depends fundamentally on public morality: some shared standards about what sorts of activities are impermissible because they so fundamentally violate trust that they threaten to undermine the social fabric. Without trust, an economy has to depend upon such complex contracts and such weighty enforcement systems that it would crumble under its own weight. What we’ve seen over the last two decades in the United States is a steady decline in the willingness of people in leading positions in the private sector – on Wall Street and in large corporations especially – to maintain those minimum standards. The new rule has become making the highest profits possible regardless of the social consequences.

In the first three decades after World War II – partly because America went through that terrible war and also experienced before that the Great Depression – there was a sense in the business community and on Wall Street of some degree of social responsibility. It wasn’t talked about as social responsibility, because it was assumed to be a bedrock of how people with great economic power should behave. CEOs did not earn more than 40 times what the typical worker earned. Rarely were there mass layoffs by profitable firms. The marginal income tax on the highest income earners in the 1950s was 91%. Even the effective rate, after all deductions and tax credits, was still well above 50%. The game was not played in a cutthroat way. In fact, consumers, workers, the community, were all considered stakeholders of almost equal entitlement as shareholders.

Around about the late 1970s and early 1980s, all of this changed quite dramatically.

More here.

Is Free Will an Illusion?

Over at The Chronicle of Higher Education, Jerry A. Coyne, Alfred R. Mele, Michael S. Gazzaniga, Hilary Bok, Owen D. Jones and Paul Bloom address the question. Bloom:

Common sense tells us that we exist outside of the material world—we are connected to our bodies and our brains, but we are not ourselves material beings, and so we can act in ways that are exempt from physical law. For every decision we make—from leaning over for a first kiss, to saying “no” when asked if we want fries with that—our actions are not determined and not random, but something else, something we describe as chosen.

This is what many call free will, and most scientists and philosophers agree that it is an illusion. Our actions are in fact literally predestined, determined by the laws of physics, the state of the universe, long before we were born, and, perhaps, by random events at the quantum level. We chose none of this, and so free will does not exist.

I agree with the consensus, but it's not the big news that many of my colleagues seem to think it is. For one thing, it isn't news at all. Determinism has been part of Philosophy 101 for quite a while now, and arguments against free will were around centuries before we knew anything about genes or neurons. It's long been a concern in theology; Moses Maimonides, in the 1100s, phrased the problem in terms of divine omniscience: If God already knows what you will do, how could you be free to choose?

More important, it's not clear what difference it makes. Many scholars do draw profound implications from the rejection of free will. Some neuroscientists claim that it entails giving up on the notion of moral responsibility. There is no actual distinction, they argue, between someone who is violent because of a large tumor in his brain and a neurologically normal premeditated killer—both are influenced by forces beyond their control, after all—and we should revise the criminal system accordingly. Other researchers connect the denial of free will with the view that conscious deliberation is impotent. We are mindless robots, influenced by unconscious motivations from within and subtle environmental cues from without; these entirely determine what we think and do. To claim that people consciously mull over decisions and think about arguments is to be in the grips of a prescientific conception of human nature.

I think those claims are mistaken. In any case, none of them follow from determinism. Most of all, the deterministic nature of the universe is fully compatible with the existence of conscious deliberation and rational thought.