Guns made civil rights possible: Breaking down the myth of nonviolent change

Charles E. Cobb Jr. in Salon:

We who believe in freedom cannot rest until it comes.
—Ella Baker

I have never subscribed to nonviolence as a way of life, simply because I have never felt strong enough or courageous enough, even though as a young activist and organizer in the South I was committed to the tactic. “I tried to aim my gun, wondering what it would feel like to kill a man,” Walter White wrote of his father’s instruction to shoot and “don’t . . . miss” if a white mob set foot on their property. If I had been in a similar situation in 1960s Mississippi, I would have wrestled with the same doubts that weighed on the young White. But in the final analysis, whatever ethical or moral difficulty I might have had would not have made me unwilling or unable to fire a weapon if necessary. I would have been able to live with the burden of having killed a man to save my own life or those of my friends and coworkers.

It has been a challenge to reconcile this fact with nonviolence, the chosen tactic of the southern civil rights movement of which I was a part. Yet in some circumstances, as seen in the pages of this book, guns proved their usefulness in nonviolent struggle. That’s life, which is always about living within its contradictions. More than ever, an exploration of this contradiction is needed. The subjects of guns and of armed self-defense have never been more politicized or more hotly debated than they are today. Although it may seem peculiar for a book largely about armed self-defense, I hope these pages have pushed forward discussion of both the philosophy and the practicalities of nonviolence, particularly as it pertains to black history and struggle. The larger point, of course, is that nonviolence and armed resistance are part of the same cloth; both are thoroughly woven into the fabric of black life and struggle. And that struggle no more ended with the passage of the Civil Rights Acts of 1964 and 1965 than it began with the Montgomery bus boycott, Martin Luther King Jr., and the student sit-ins.

More here.

The Deluge: The Great War and the Remaking of Global Order 1916-1931

Mark Mazower in The Guardian:

“I believe in American exceptionalism with every fibre of my being,” said President Obama at West Point last month. His speech was a reaffirmation of the US as the indispensable nation, destined to lead the world. We have lived with this kind of rhetoric for a long time now, so long that it seems to have been with us forever. Adam Tooze's new book takes us back a century, to the time when this was all very new. It offers a bold and persuasive reinterpretation of how the US rose to global pre-eminence and along the way it recasts the entire story of how the world staggered from one conflagration to the next.

In the late 19th century, the world was dominated by imperial European great powers, happily carving up between them any available territories in Africa and Asia. The United States watched from the sidelines, a bit-player in international affairs, the energies of its politicians dedicated to overcoming the bitter internal legacy of the civil war. With a negligible navy and a tiny diplomatic service, it was scarcely a power of even the second rank, and, apart from the unfortunate inhabitants of Cuba and the Philippines, people around the world could live their entire lives in ignorance of the Stars and Stripes. Tooze shows, more emphatically than any other scholar I have read, how decisively and how sweepingly the first world war ended this state of affairs. In the midst of the war, financial and naval power in particular moved across the Atlantic never to return. In this situation, Woodrow Wilson did not seek merely to replace the British as the hegemon of a liberal trading order, as historians used to tell us. Rather, he wanted to move the international system as a whole beyond the practices of imperial great-power rivalry that he blamed for the war itself.

More here.

The Letters of Robert Frost: Volume 1, 1886-1920

William Logan at The New York Times:

In the early fall of 1912, a blandly handsome, tousle-headed American schoolteacher arrived in London. Nearing 40, coming without introduction or much of a plan — except, as he later confessed, “to write and be poor” — he was making a last attempt to write himself into poetry. It would have taken mad willfulness to drag his wife and four children out of their settled New Hampshire life in a quixotic assault on the London literary scene. Still, he was soon spending a candlelit evening with Yeats in the poet’s curtained rooms, having come to the attention of that “stormy petrel” Ezra Pound, who lauded him in reviews back home. Little more than two years later, the schoolteacher sailed back, having published his first two books, “A Boy’s Will” (1913) and “North of Boston” (1914). He had become Robert Frost.

The modernists remade American poetry in less than a decade, but like the Romantics they were less a group than a scatter of ill-favored and sometimes ill-tempered individuals. Frost was in most ways the odd man out: He despised free verse, had only a patchy education and wrote about country life. He knew the dark and sometimes terrible loneliness that descended upon stonewalled farms and meager villages. Looking back on his work, this throwback to Chaucer and Virgil plaintively asked one of his correspondents, “Doesnt [sic] the wonder grow that I have never written anything or as you say never published anything except about New England farms?” (“North of Boston” was originally titled “Farm Servants and Other People.”)

More here.

THE HISTORIES OF HERODOTUS, TRANSLATED BY TOM HOLLAND

Steve Donoghue at The Quarterly Conversation:

For centuries, men of letters and plenty of his fellow historians took great pleasure in reducing the prototypical chronicler, Herodotus of Halicarnassus, to the status of a mere wonder-monger, the garrulous and credulous counter-weight to the austere objectivity of his younger contemporary and immediate successor, Thucydides. In fact, it was a thinly veiled slight in Thucydides’s great work on the Peloponnesian War that got the tradition of Herodotus-bashing started; after that, a bitterly moralizing essay by Plutarch kept it going, it flourished in the Renaissance, and it persisted into modern times. Even fifty years ago, the great classicist Peter Green was gently mocking the standard reduction of “The Father of History”:

Here is Herodotus: a garrulous, credulous collector of sailors’ stories and Oriental novelle, ahistorical in method, factually inaccurate, superstitious and pietistic, politically innocent, his guiding motto cherchez la femme et n’oubliez pas le Dieu

More here.

World War I: The War That Changed Everything

Margaret MacMillan at the WSJ:

A hundred years ago next week, in the small Balkan city of Sarajevo, Serbian nationalists murdered the heir to the throne of Austria-Hungary and his wife. People were shocked but not particularly worried. Sadly, there had been many political assassinations in previous years—the king of Italy, two Spanish prime ministers, the Russian czar, President William McKinley. None had led to a major crisis. Yet just as a pebble can start a landslide, this killing set off a series of events that, in five weeks, led Europe into a general war.

The U.S. under President Woodrow Wilson intended to stay out of the conflict, which, in the eyes of many Americans, had nothing to do with them. But in 1917, German submarine attacks on U.S. shipping and attempts by the German government to encourage Mexico to invade the U.S. enraged public opinion, and Wilson sorrowfully asked Congress to declare war. American resources and manpower tipped the balance against the Central Powers of Germany and Austria-Hungary, and on Nov. 11, 1918, what everyone then called the Great War finally came to an end.

More here.

Rabbits are not the only meat, so why the fury over Jeanette Winterson’s fluffy meal?

Jeremy Strong in The Conversation:

Jeanette Winterson’s trapping and cooking of a rabbit has led the author into a heated dispute. Posting an image of the skinned animal on her kitchen counter with the comment “Rabbit ate my parsley. I am eating the rabbit” caused such a furore that she was invited on to BBC Radio 4’s World at One to defend her position. “I’ve had at least 100 tweets a minute since Sunday, I’m deluged with it”, she said.

As Winterson observes of those who have branded her “sick”, the problem seems not to be that she has prepared an animal for the pot, but that she undertook all the stages (capture, slaughter, preparation, cooking) herself. What’s remarkable about this is that the act concerned was once – and not all that long ago – utterly unremarkable.

The way we think about food appears to have completely changed. The plethora of glossy television programmes about cooking elides a key fact: that the regular practice of household cookery, let alone hunter-gathering, continues to decline. (Joanna Blythman’s book Bad Food Britain offers a useful summary of this national malaise.)

There’s an irony here that’s hard to miss.

More here.

Carl Zimmer: This Is Your Brain on Writing

From the New York Times:

A novelist scrawling away in a notebook in seclusion may not seem to have much in common with an NBA player doing a reverse layup on a basketball court before a screaming crowd. But if you could peer inside their heads, you might see some striking similarities in how their brains were churning.

That’s one of the implications of new research on the neuroscience of creative writing. For the first time, neuroscientists have used fMRI scanners to track the brain activity of both experienced and novice writers as they sat down — or, in this case, lay down — to turn out a piece of fiction.

The researchers, led by Martin Lotze of the University of Greifswald in Germany, observed a broad network of regions in the brain working together as people produced their stories. But there were notable differences between the two groups of subjects. The inner workings of the professionally trained writers in the bunch, the scientists argue, showed some similarities to people who are skilled at other complex actions, like music or sports.

The research is drawing strong reactions. Some experts praise it as an important advance in understanding writing and creativity, while others criticize the research as too crude to reveal anything meaningful about the mysteries of literature or inspiration.

More here.

On Supporting England

Peter Pomerantsev in the London Review of Books:

‘Will you be supporting England?’ English people have asked me ever since I can remember. My family moved to London from Kiev in 1980, when I was three. The question might look like a football version of Norman Tebbit’s ‘cricket test’. But one of the earliest things I worked out as an immigrant child is that you’re not meant to rush into English identity. You can do that in the US maybe, but it would be utterly un-English to try to be too English too fast. ‘Who do you support?’ is a bit of a trick question.

I’d felt this instinctively early on but Dr Douek, who cut out my tonsils when I was 18, helped me understand it in a more structured way. ‘The English are very accepting, as long as you don’t try to be pukkah English,’ he said, as he peered into my throat. He too had come here as a child, part of a family of Hungarian Jews who fled the Nazis. By the time of my operation Douek was well into retirement age but still spoke with a slight Mitteleuropean accent, which I suspect may have been affected, to show he wasn’t trying to be ‘pukkah’.

We agreed that becoming English took three generations. Someone born outside England was ‘from Russia/Hungary’. The second generation, born in England, might say they were English ‘but of Russian/Hungarian parentage’ (the ‘but’ is crucial). The third generation was pretty much English.

More here.

Casual Sex Is Actually Excellent for You, If You Love Casual Sex

Ryan Jacobs in Pacific Standard (Photo: Patricia Chumillas/Shutterstock):

Forty-two percent of subjects reported having sex outside a relationship. When it came to those who were sociosexually unrestricted, having casual sex was associated with higher self-esteem and life satisfaction and lower depression and anxiety. “Typically, sociosexually unrestricted individuals (i.e., those highly oriented toward casual sex) reported lower distress and higher thriving following casual sex, suggesting that high sociosexuality may both buffer against any potentially harmful consequences of casual sex and allow access to its potential benefits,” the researchers write. Additionally, feelings of authenticity “amplified” the beneficial psychological effects, but did not spur them, as hypothesized. Surprisingly, the researchers did not find any negative effects on well-being in those who were sociosexually restricted but had casual sex anyway. This might be due to limited sample size, though, since not very many restricted subjects did this.

“This study certainly seems to suggest that casual sex can be a good thing for people who are open to it, desire it, and have positive attitudes towards it,” Vrangalova says in an email to Pacific Standard. “And it is always a good idea to be safe while doing it and not get too wasted – other research shows that a lot of the guilt following casual sex comes from failure to [use] condoms or getting too drunk.”

Still, these findings do not imply that “casual sex is better than relationship sex, even for unrestricted people,” Vrangalova cautions. “The vast majority of unrestricted people desire, enjoy, and form relationships; they just also enjoy and desire casual sex.”

Not unexpectedly, the types of people who constantly desire casual sex sound a bit insufferable. They are generally “extroverted,” sensation-seeking, “impulsive,” “avoidantly attached” males, who “also invest less in romantic relationships and are more likely to have cheated on a romantic partner (perhaps because monogamous arrangements are less well-suited for them),” Vrangalova says. “Among men, they are also more likely to be physically strong, and especially among college men, also more sexist, manipulative, coercive and narcissistic.” They also tend to be “unconventional, attractive, [and] politically liberal.”

Crucially, the study suggests that casual sex is much more textured and complex than previous research has let on.

More here.

Why Does the Belief that Women are Safest when Secluded Still Hold Sway in India?

Gaiutra Bahadur in The Nation:

The belief that women are safest when secluded still holds sway in India. On the sleek Delhi metro, there are cars exclusively—though not compulsorily—for women, and at their entrance guards outfitted in navy-blue saris stand sentinel to deter male passengers from entering them, whether by mistake or to make mischief. The threat of sexual harassment, from incidents of aggressive ogling or groping (known euphemistically as “Eve teasing”) to rape, discourages women from venturing out alone, especially at night. And if, having gone out, they are harassed or assaulted, they are often told it was their fault. In 2008, a mob molested two Indian-American women as they left a Mumbai hotel after midnight for a New Year’s stroll with their husbands. The chairman of a state human rights commission said of the incident, “Yes, men are bad…. But who asked [the women] to venture out in the night…. Women should not have gone out in the night and when they do, there is no point in complaining that men touched them and hit them.”

The history of publicly engaged women in India—especially those from elite backgrounds, such as Rokeya Sakhawat Hossain and her publisher Sarojini Naidu, the independence leader and future president of the Indian National Congress—is long and vibrant. What’s relatively new are the employment, educational and leisure opportunities that globalization has created for middle-class and lower-middle-class Indian women, who by working in offices, commuting on trains or buses, or shopping in cafes and malls have staked a certain claim. The ranks of women with roles outside the home—once filled mostly by those at the very bottom of the socioeconomic ladder or by those, like Hossain, at the top—are expanding, fulfilling the prophecy of Ladyland at least slightly. Yet Hossain’s dream of freedom from crime remains unrealized in India.

There, as across much of the world, violence against women appears to be escalating. The number of reported rapes in India has surged by 792 percent in the past four decades, making it the nation’s fastest-growing crime. To an extent, the statistics reflect greater reporting, but they also point to a substantive issue.

More here.

The Disruption Machine

Jill Lepore in The New Yorker (Illustration by Brian Stauffer):

The idea of progress—the notion that human history is the history of human betterment—dominated the world view of the West between the Enlightenment and the First World War. It had critics from the start, and, in the last century, even people who cherish the idea of progress, and point to improvements like the eradication of contagious diseases and the education of girls, have been hard-pressed to hold on to it while reckoning with two World Wars, the Holocaust and Hiroshima, genocide and global warming. Replacing “progress” with “innovation” skirts the question of whether a novelty is an improvement: the world may not be getting better and better but our devices are getting newer and newer.

The word “innovate”—to make new—used to have chiefly negative connotations: it signified excessive novelty, without purpose or end. Edmund Burke called the French Revolution a “revolt of innovation”; Federalists declared themselves to be “enemies to innovation.” George Washington, on his deathbed, was said to have uttered these words: “Beware of innovation in politics.” Noah Webster warned in his dictionary, in 1828, “It is often dangerous to innovate on the customs of a nation.”

The redemption of innovation began in 1939, when the economist Joseph Schumpeter, in his landmark study of business cycles, used the word to mean bringing new products to market, a usage that spread slowly, and only in the specialized literatures of economics and business. (In 1942, Schumpeter theorized about “creative destruction”; Christensen, retrofitting, believes that Schumpeter was really describing disruptive innovation.) “Innovation” began to seep beyond specialized literatures in the nineteen-nineties, and gained ubiquity only after 9/11. One measure: between 2011 and 2014, Time, the Times Magazine, The New Yorker, Forbes, and even Better Homes and Gardens published special “innovation” issues—the modern equivalents of what, a century ago, were known as “sketches of men of progress.”

The idea of innovation is the idea of progress stripped of the aspirations of the Enlightenment, scrubbed clean of the horrors of the twentieth century, and relieved of its critics. Disruptive innovation goes further, holding out the hope of salvation against the very damnation it describes: disrupt, and you will be saved.

More here.

Friday, June 20, 2014

Hummingbirds have been slow to give up their secrets, but slowly, we’ve learned to understand them

Bernd Brunner in The Smart Set:

Some hummingbirds are no larger than a thumb, and the smallest among them are the very smallest birds in existence. Yet it’s hard to avoid superlatives when talking about these tiny creatures. With their often magnificent jewel-like colors, they glimmer like finely wrought works of art. In fact, they are miracles of nature: extremely agile, fast-moving animals that take the characteristics of birds to their utmost limit. Combining dynamism, fragility, and a surprising degree of fearlessness, hummingbirds can be found in the most diverse environments: in tiny front yards in North, Central, and South American cities; on the high plateau of the Andes; and in the dense Amazon forests.

The very first mention of hummingbirds by a European probably occurred in the accounts of Jean de Léry, a French sailor and explorer. De Léry was part of a group of mariners sent to the Brazilian coast in 1556. His 1557 Histoire d’un voyage fait en la terre du Brésil, autrement dite Amérique contains a number of observations about the inhabitants, flora, and fauna of this new continent, completely unknown to the readers. Throughout the two centuries after de Léry, a range of authors mentioned hummingbirds, but a systematic framework for their observations was still a long way off. George Marcgrave, who traveled to Brazil in 1638, described several hummingbird species in his Historia Rerum Naturalium Brasiliae, published in Amsterdam ten years later. Soon the birds were popping up in all sorts of contexts — their unusual features were always worth an anecdote. In his Mundus Mirabilis Tripartitus (1689), one of the compendia of all sorts of natural curiosities popular at the time, the German Eberhard Werner Happel speaks of a “little bird in its shining little plumage” that lives in the “New Netherlands”:

It is barely the length of a thumb and sucks from the flowers like a bee …. Another type of this most beautiful bird is found on the islands of the Antilles, but especially on the island Anegada. Its body is not much larger than that of a beetle, covered with colorful feathers like a rainbow, and its neck is decorated with a little ruby-red ring. The wings appear as if gilded on the underside and the gold-green head wears a tiny cap or hood.

Hummingbirds inspired Hector St. John de Crevecoeur, one of the first American naturalists, to a stirring comparison in his Letters from an American Farmer (1782): “Where do passions find room in so diminutive a body? They often fight with the fury of lions, until one of the combatants falls a sacrifice and dies.”

More here.

VLADIMIR NABOKOV’S UNPUBLISHED ‘LOLITA’ SCREENPLAY NOTES

Blake Bailey in Vice:

Vladimir Nabokov’s Lolita was first published in 1955, as part of the Paris-based Olympia Press Traveler’s Companion books, a series of louche and sometimes avant-garde fiction. Lolita was both: A rapturous first-person account of a middle-aged European’s passion for a prepubescent “nymphet,” the novel was resplendent with Nabokov’s usual wordplay, puzzles, and recondite allusions. Quite apart from its erotic content, Lolita would seem an unlikely best seller in Eisenhower’s America, but when Putnam published an edition in 1958, it sold faster than any American novel since Gone with the Wind. A month later, Stanley Kubrick bought the film rights for $150,000, despite the considerable challenge of making a movie that would satisfy the censors. Meeting with Nabokov the following summer, in 1959, Kubrick tried to entice the great Russian-American novelist to write the screenplay himself. Nabokov gave the matter some thought, but finally declined. “A particular stumbling block,” his wife, Véra, wrote Kubrick’s partner, James Harris, was “the [filmmakers’] idea of having the two main protagonists”—Lolita Haze and her 40-something lover, Humbert Humbert—“married with an adult relative’s blessing.”

A few months later, back in Europe, Nabokov “experienced a small nocturnal illumination” as to how he might fruitfully proceed with an adaptation of Lolita—whereupon, as if by magic, a telegram from Kubrick materialized: “Convinced you were correct dislike marriage Stop Book a masterpiece and should be followed even if Legion and Code disapprove Stop Still believe you are only one for screenplay Stop If financial details can be agreed would you be available.” Hollywood agent Irving “Swifty” Lazar negotiated a deal whereby Nabokov would receive $40,000 for writing the screenplay and an additional $35,000 if he received sole credit, and in March 1960 the novelist came to California and rented a villa in Brentwood Heights. As he later recalled in his foreword to the published screenplay, “Kubrick and I, at his Universal City studio, debated in an amiable battle of suggestion and countersuggestion how to cinemize the novel. He accepted all my vital points, I accepted some of his less significant ones.” Meanwhile, with the help of Lazar and his wife, the Nabokovs were introduced to the Hollywood cocktail circuit. “I’m in pictures,” John Wayne explained when Nabokov cordially inquired about his line of work.

More here.

Is the world itself a mathematical structure?

Jeremy Butterfield in +Plus Magazine:

There is no doubt that the history of science, and especially of physics, provides countless illustrations of the power of mathematical language to describe natural phenomena. Famously, Galileo himself — the founding father of the mathematical description of motion — envisaged describing many, perhaps all, phenomena in mathematical terms. Thus in The Assayer he wrote the following (saying “philosophy” in roughly the sense of our words “natural science” or “physics”):

“Philosophy is written in this grand book — I mean the universe — which stands continually open to our gaze, but it cannot be understood unless one first learns to comprehend the language in which it is written. It is written in the language of mathematics, and its characters are triangles, circles, and other geometric figures, without which it is humanly impossible to understand a single word of it; without these, one is wandering about in a dark labyrinth.”

Of course, since Galileo's time the language of mathematics has developed enormously — in ways that even he, a genius, would have found unimaginable. This lends credence to the claim that maths is more than just a tool; that it is embedded deeper in the nature of reality. Some people have taken this idea to the extreme: they suggest that the Universe itself is a mathematical structure.

How can this be? One line of reasoning, which has been taken by the physicist Max Tegmark in his book Our Mathematical Universe, starts with the premise that external reality is completely independent of us humans. If this is true, then external reality must have a description which is utterly free of subjective ingredients: that is, utterly free of factors arising from biological facts about human cognition, or cultural facts, or facts about an individual human's psychology.

More here.

The Shame of Shuhada Street

Ayelet Waldman in The Atlantic:

I first saw the boys through the rearview mirror of the car I was riding in, as they approached Shuhada Street. One of them was about the age of my daughter, who became a bat mitzvah last week. The other might have been 16 or so, like my older son. The boys hesitated at the top of the street and seemed to take a breath. Then they stepped into the void.

Shuhada Street, lined with small shops whose owners typically lived upstairs, was once among the busiest market streets in this ancient city. But in 1994, in response to a horrific massacre that left 29 people dead and 125 injured, the Israel Defense Forces began clamping down on Shuhada Street. They welded shut the street-facing doors of all the homes and shops, and by the time of the Second Intifada in 2000, had turned the bustling thoroughfare into a ghost street on which no one was permitted to set foot. No one, that is, who is Palestinian. Israeli Jews and foreign visitors are free to come and go along the road—to snap photos and make their way to Hebron’s three Jewish settler outposts, Beit Hadassah, Beit Romano, and Avraham Avinu. But there is nothing to buy, nothing to see, no reason to tarry. The stores are all closed. The few Palestinians who remain have been barred from the street where they live. If they want to enter their homes, they must do so through back doors, which in many cases involves clambering over rooftops.

One might be tempted to view Shuhada Street as just another casualty in an endless cycle of violent retribution. A Palestinian kills dozens of Hebron’s Jews, so Israel punishes the Palestinians of Hebron by closing Shuhada Street. But that is not, in fact, what happened. The victims of the massacre that impelled the Israeli government to shutter Shuhada were not Jews. They were Palestinians—unarmed Palestinians gunned down as they prayed at the nearby Cave of the Patriarchs by Baruch Goldstein, an American-born Jewish zealot with Israeli military training and a Galil assault rifle, who stopped firing only when he was overcome and killed by survivors of his attack. You can add Shuhada Street, and the vibrant urban life it once sustained and embodied, to the list of Goldstein’s victims.

More here.

Richard Linklater’s “Boyhood”

Amy Taubin at Artforum:

TIME FLIES in Richard Linklater’s Boyhood, which is both a conceptual tour de force and a fragile, unassuming slice of movie life. Two hours and forty minutes in length, it depicts the maturation of a boy named Mason (Ellar Coltrane) from a six-year-old child into an eighteen-year-old young adult. There has never been a fiction film quite like it.

“‘The clay of cinema is time.’” Tarkovsky’s axiom, paraphrased by Linklater in a conversation we had recently over the phone, has guided the director ever since Slacker (1991)—as has his own corollary that a film should be “locked in the moment and place of its making.” Linklater’s second feature, Slacker, was emblematic of a generation—and of a promising moment in American independent film, when a handful of directors eschewed Hollywood production values and conventional dramatic structure to combine the influences of European art cinema with distinctly American imagery and culture. Set in Austin, where, in 1985, Linklater founded a film society in order to show such personal favorites as Tarkovsky, Bresson, Godard, and James Benning, Slacker perambulates a mile-long strip bordering the University of Texas campus, connecting by happenstance more than fifty incidents and roughly a hundred characters within a single day. In 1991, when Todd Haynes’s Poison won the grand prize at the Sundance Film Festival, jury member Gus Van Sant said that his vote had gone to Slacker. Haynes puts his formalism up front; Linklater buries his in the bedrock of his narratives. And if Jim Jarmusch is the post-Beat cinematic bard of rust-belt bohemians and downtown hipsters, then Linklater is the Longfellow of a less glamorous alt-culture—one that could pass for mainstream America, whatever that is. Jarmusch’s protagonists are loners. Linklater creates characters who marry, have kids, divorce, have jobs, and struggle to pay the rent and child support. He is a visionary of everyday life.

More here.

Charles Krauthammer floats like a vulture

George Scialabba at The Nation:

Still, some deep and troubling questions lurk beneath Krauthammer’s crass celebration of America’s manifest destiny. The flowering of equality, self-reliance and civic virtue in the nonslave states from the mid-eighteenth to the mid-nineteenth century is one of the political wonders of the world, a signal achievement in humankind’s moral history. It was made possible by a great crime: it all took place on stolen, ethnically cleansed land. Likewise that other pinnacle of political enlightenment, Athenian democracy, which rested on slavery. But in both cases, didn’t the subordination or expropriation of the many allow the few to craft social relations from which the rest of the world has learned invaluable lessons? Is some such stolen abundance or leisure a prerequisite of moral and cultural advance? Even if we acknowledge the dimensions of the crime, can we really regret the achievement? Krauthammer is no help in answering such questions, but he is clever enough (and truculent enough) to force them on our attention.

Krauthammer is an unapologetic, even strident hawk. He chides Jeane Kirkpatrick, no less, for suggesting that after the breakup of the Soviet Union, the United States might become “a normal country in a normal time.” On the contrary, he admonishes, we live in a permanently abnormal world. “There is no alternative to confronting, deterring and, if necessary, disarming states that brandish and use weapons of mass destruction. And there is no one to do that but the United States,” with or without allies. Of course, there is no question of deterring or disarming the United States. The very idea is outlandish: America is uniquely benign and that “rarest of geopolitical phenomena,” a “reluctant hegemon” almost quixotic in its “hopeless idealism.” Only twisted leftists would deny that freedom is “the centerpiece” of American foreign policy.

More here.

The Great Barrier Reef from Captain Cook to Climate Change

James Hamilton-Paterson at Literary Review:

Since 1770, when Captain Cook blundered into it in Endeavour and came to grief, Australia's Great Barrier Reef has gone from a navigator's nightmare through being a World Heritage treasure to its present status of moribund paradise. On the way it had to overcome its early 19th-century reputation as the lair of savage Aboriginals. Thanks to Cook's murder in Hawaii and lurid stories of cannibalism by survivors of shipwrecks on the Reef, the newspaper-reading public was cheerfully predisposed to view the 1,400-mile length of the Reef and the Torres Strait as a death zone to sailors, as fatally treacherous to ships as the spear-throwing locals were to their crews.

What did most to change the Great Barrier Reef's image was the interest scientists began taking in corals. The English naturalist Joseph Beete Jukes was the first to resist the popular stereotyping of the Reef and its inhabitants. He and his friend 'Griffin' Melville supplied lyrical descriptions of corals that greatly helped R M Ballantyne in writing his boys' Robinsonade, The Coral Island (1858), especially since Ballantyne had never been nearer the South Pacific than Canada. The science remained contentious. When Charles Darwin published The Structure and Distribution of Coral Reefs in 1842, he was the first to propose that corals could only thrive in water shallow enough for daylight to reach them and that the great limestone reefs that supported them, often to immense depths, were simply layer upon layer of former corals that had died as the Earth's crust beneath them had sunk over millennia.

More here.