Shock and Awe: The neverending end times

Lewis H. Lapham in Lapham's Quarterly:

’Tis not contrary to reason to prefer the destruction of the whole world to the scratching of my finger. —David Hume

Nor is it contrary to reason to prefer the sight of a raging inferno or restless typhoon to the view of a worm in one’s apple or a fly in the soup. The spectacle of disaster—real and imagined, past, present, and imminent—is blockbuster box office, its magnitude measured by the number of dead and square miles of devastation, the cost of property, rates of insurance, long-term consequences, short-form shock and awe.

…Behold the world for what it is, a raging of beasts and a writhing of serpents, and know that the war on terror will be with us until the end of our days. Get used to it; harden thy resolve; America is everywhere besieged by monsters that must be destroyed—by any and all means necessary, no matter how costly or barbaric. And yes, Virginia, there is an answer to Adam Smith’s disturbing question—to prevent a paltry misfortune to oneself not only is it possible, it’s also prudent to sacrifice as many of our fellow human beings as circumstances require. The UN Security Council in 1990 imposed harsh economic sanctions on Iraq in order to send a stern message to Saddam Hussein. When Madeleine Albright, then U.S. ambassador to the United Nations, was asked in an interview on 60 Minutes whether she had considered the resulting death of over 500,000 Iraqi children (of malnutrition and disease for lack of medicine and baby food), she said, “We think the price is worth it.” Together with an estimated $2 trillion, President George W. Bush sacrificed the lives of nearly 5,000 American soldiers and 165,000 Iraqi civilians to prevent America from being harmed by Saddam Hussein’s nonexistent weapons of mass destruction. The cost–benefit analysis emerged from the administration’s doctrine of forward deterrence and preemptive strike, a policy predicated on the notion that if any nation anywhere in the world presumed even to begin to think of challenging America’s supremacy (moral, military, cultural, and socioeconomic) America reserved the right to strangle the impudence at birth—to bomb the peasants or the palace, block the flows of oil or bank credit.

Michael Ledeen, foreign-policy adviser to the Bush White House and Freedom Scholar at the Foundation for Defense of Democracies, put the policy in its clearest perspective: “Every ten years or so, the United States needs to pick up some crappy little country and throw it against the wall, just to show the world we mean business.”

More here.

‘Shuffle Along’ and the Lost History of Black Performance in America

John Jeremiah Sullivan in The New York Times:

The blacks-in-blackface tradition, which lasted more than a century in this country, strikes most people, on first hearing of its existence, as deeply bizarre, and it was. But it emerged from a single crude reality: African-American people were not allowed to perform onstage for much of the 19th century. They could not, that is, appear as themselves. The sight wasn’t tolerated by white audiences. There were anomalous instances, but as a rule, it didn’t happen. In front of the cabin, in the nursery, in a tavern, yes, white people might enjoy hearing them sing and seeing them dance, but the stage had power in it, and someone who appeared there couldn’t help partaking of that power, if only ever so slightly, momentarily. Part of it was the physical elevation. To be sitting below a black man or woman, looking up — that made many whites uncomfortable. But what those audiences would allow, would sit for — not easily at first, not without controversy and disdain, but gradually, and soon overwhelmingly — was the appearance of white men who had painted their faces to look black. That was an old custom of the stage, going back at least to “Othello.” They could live with that. And this created a space, a crack in the wall, through which blacks could enter, because blacks, too, could paint their faces. Blacks, too, could exist in this space that was neither-nor. They could hide their blackness behind a darker blackness, a false one, a safe one. They wouldn’t be claiming power. By mocking themselves, their own race, they were giving it up. Except, never completely. There lay the charge. It was allowed, for actual black people to perform this way, starting around the 1840s — in a very few cases at first, and then increasingly — and there developed the genre, as it were, of blacks-in-blackface. A strange story, but this is a strange country.

Picture: Bert Williams having blackface applied in a production still from an uncompleted 1913 film that was identified in the MoMA archives in 2014.

More here.

Friday, March 25, 2016

The ethics of conspiracy theories

Patrick Stokes at ABC:

Conspiracy theories weren't invented by the internet. They go back at least as far as the elite reaction to the French Revolution, with a grand Illuminati-Masonic conspiracy theory taking hold on both sides of the Atlantic before the start of the 19th century. Anti-Semitic conspiracy theories had tragic consequences during the last century, while today the Obama administration has had to contend with everything from demands for the president's birth certificate to state governments fuelling rumours of impending martial law. The consequences of conspiracy theories are, as they have always been, concrete and significant.

Most of us use the term 'conspiracy theory' to refer to beliefs we consider outlandish, paranoid, and almost certainly false. Yet strictly speaking this is unfair: on the simplest definition, a conspiracy theory is simply any explanation of observed events that posits two or more actors working in secret. Philosophers who have considered conspiracy theories as a class of explanation insist that there's nothing intrinsically irrational about conspiracy theory so defined. In fact, if we didn't accept the idea of a group of actors plotting in secret, we'd be unable to explain a host of historical events, from the assassination of Julius Caesar to Watergate. Conspiracies happen.

More here.

Is the cold fusion egg about to hatch?

Huw Price in Aeon:

Three months ago I wrote an essay in Aeon about intriguing developments in low-energy nuclear reactions (LENR), a controversial field that traces its origins to the claims of ‘cold fusion’ by Martin Fleischmann and Stanley Pons in 1989. Cold fusion itself is widely regarded as discredited, yet there are several recent reports of LENR devices producing commercially useful amounts of heat. As David Bailey and Jonathan Borwein have pointed out in HuffPost Science, it seems increasingly improbable that all these findings are the result of fraud or error, as skeptics assert. But the only remaining alternative – that science simply made the wrong call when it dismissed cold fusion – is still almost invisible in serious scientific conversations and in the mainstream media.

Why is this possibility so broadly ignored? I suggested that it is because LENR is caught in what I called a ‘reputation trap’. Cold fusion has had such a bad name that scientists and journalists put their reputations at risk if they dare to express an interest in LENR. So a fascinating story goes largely unnoticed and unreported.

More importantly, the reputation trap has pushed to the margins an idea that, if it does work, might be just what we need right now: the ‘energy miracle’ that Bill Gates is so desperately seeking.

More here.

America’s Hands-On Hegelian

John Kaag in the Chronicle of Higher Education:

When the Republican presidential candidate Marco Rubio said that his country needed more welders and fewer philosophers, most listeners took him to be taking an anti-intellectual jab at academe, at the cultural and economic viability of high theory. Many of my philosophy colleagues came to the defense of our discipline, supplying statistics that demonstrated the economic payoff from pursuing the love of wisdom.

That response, however, sidesteps what might be most disturbing about Rubio’s comment: the suggestion that welders cannot be philosophers and philosophers cannot be welders. Theoreticians have always been mocked for being only loosely tethered to the world of human affairs (think of Aristophanes’ characterization, in The Clouds, of Socrates as hopelessly detached, flying solo in his balloon). And many philosophers, like Plato, have returned the favor by dismissing the rest of the world as cave dwellers. The advent of philosophy as an academic discipline in the 20th century has done nothing to ease this tension, and we have returned, once again, to the disturbing point where statesmen call for the end of theorizing.

More here.

how europe is slowly banning the american death penalty

William Watkin at 3:AM Magazine:

The rate of executions has been decreasing in America for a couple of years now. Some states have just stopped executing. Others such as Connecticut, Illinois, New Jersey, New Mexico, Maryland and New York have gone even further and taken the death penalty off the statute books. The result is that only a handful of states, die-hard vengeful places like Florida, Texas, Missouri, and Oklahoma, still doggedly pursue death by lethal injection. The reasons for this are not moral, legal or philosophical. Rather, the slow collapse of one of America’s most cherished and reviled institutions, death row, is due to primarily managerial reasons, specifically the meat and potatoes of any failing business model: supply of stock and training of staff.

There are two fundamental problems with the lethal injection in the US today. Either you can’t get the drugs or, if you can, you just can’t get them in the arms of the accused, without yourself being accused of breaking the eighth amendment, forbidding cruel and unusual punishment. How did we get to this pretty pass, committed as we have been to kill criminals whatever the cost?

Considering the promise of efficiency that is part and parcel of the medicalisation of the death penalty with the development of the lethal injection, it is surprising to note that, according to Austin Sarat, 7% of all lethal injections have been ‘botched’ as opposed to 3% of all other methods. One possible problem with the lethal injection is that to kill someone in this way you need expertise, training and of course the will to kill.

more here.

How to Read Dante in the 21st Century

Joseph Luzzi at The American Scholar:

Part of the problem lies in the difficulty that Dante poses for English translation. He wrote in an intensely idiomatic, rhyme-rich Tuscan with a surging terza rima meter that gives the poem its galloping energy—a unique rhythm that’s difficult to reproduce in rhyme-poor English separated from Dante’s local vernacular by centuries. The content of Dante’s writing presents an even bigger problem. Unlike the other author he supposedly shared the world with, Shakespeare, Dante was self-consciously scholarly and intellectual, filling his verses with allusions to ancient, biblical, and contemporary medieval writing, and tackling a range of theological, philosophical, political, and historical issues. And then there are all those characters! From Inferno 1 to Paradiso 33, scores of different literary personae—some real, some invented, some famous, some obscure—take the stage to plead their case or expound on their joy before the autobiographical character Dante as he journeys from hell to heaven. So in order to “get” Dante, a translator has to be both a poet and a scholar, attuned to the poet’s vertiginous literary experimentalism as well as his superhuman grasp of cultural and intellectual history. This is why one of the few truly successful English translations comes from Henry Wadsworth Longfellow, a professor of Italian at Harvard and an acclaimed poet. He produced one of the first complete, and in many respects still the best, English translations of The Divine Comedy in 1867. It did not hurt that Longfellow had also experienced the kind of traumatic loss—the death of his young wife after her dress caught fire—that brought him closer to the melancholy spirit of Dante’s writing, shaped by the lacerating exile from his beloved Florence in 1302.
Longfellow succeeded in capturing the original brilliance of Dante’s lines with a close, sometimes awkwardly literal translation that allows the Tuscan to shine through the English, as though this “foreign” veneer were merely a protective layer added over the still-visible source. The critic Walter Benjamin wrote that a great translation calls our attention to a work’s original language even when we don’t speak that foreign tongue. Such extreme faithfulness can make the language of the translation feel unnatural—as though the source were shaping the translation into its own alien image. Longfellow’s English indeed comes across as Italianate: in surrendering to the letter and spirit of Dante’s Tuscan, he loses the quirks and perks of his mother tongue.

more here.

Joseph Brodsky, Darker and Brighter

Cynthia Haven at The Nation:

Death dogged Brodsky—the tricky heart in his chest, the repeated and increasingly complex surgeries to eke out a few more years—until he finally succumbed at age 55. It was another fact that fueled the romantic myth, when the reality was far grimmer and harsher than his public imagined. Brodsky was terrified of death, despite his protestations, despite lighting up another cigarette almost as soon as the open-heart surgery was finished. Teasley writes that he “fought fear of death with poetry, love, sex, coffee and cigarettes—­and tried to deny death’s significance while almost never being able to forget it.”

Michael Gregory Stephens, writing in Ploughshares in 2008, described meeting Brodsky in a seedy old-man’s bar on a Sunday morning, when the poet was already belligerently drunk. I thought Stephens must be mistaken, so I contacted him a few years ago to ask. He told me he’d confirmed with others that it was not an isolated incident. However, it appears to have taken place sometime in the year-and-a-half period when Brodsky lost his father, his mother, and Carl Proffer, the man he once called “an incarnation of all the best things that humanity and being American represent.” Without the context provided by Teasley’s book, the episode would be preposterous and degrading, and difficult to reconcile with the aesthetic poet. Brodsky had achieved everything, succeeding beyond his wildest dreams—but he had lost the world in which he could savor those successes and the people dear to him who would have known what the victories had cost.

more here.

Harvard’s Eugenics Era

Adam S. Cohen in Harvard Magazine:

In August 1912, Harvard president emeritus Charles William Eliot addressed the Harvard Club of San Francisco on a subject close to his heart: racial purity. It was being threatened, he declared, by immigration. Eliot was not opposed to admitting new Americans, but he saw the mixture of racial groups it could bring about as a grave danger. “Each nation should keep its stock pure,” Eliot told his San Francisco audience. “There should be no blending of races.” Eliot’s warning against mixing races—which for him included Irish Catholics marrying white Anglo-Saxon Protestants, Jews marrying Gentiles, and blacks marrying whites—was a central tenet of eugenics. The eugenics movement, which had begun in England and was rapidly spreading in the United States, insisted that human progress depended on promoting reproduction by the best people in the best combinations, and preventing the unworthy from having children.

…Oliver Wendell Holmes Sr.—A.B. 1829, M.D. ’36, LL.D. ’80, dean of Harvard Medical School, acclaimed writer, and father of the future Supreme Court justice—was one of the first American intellectuals to espouse eugenics. Holmes, whose ancestors had been at Harvard since John Oliver entered with the class of 1680, had been writing about human breeding even before Galton. He had coined the phrase “Boston Brahmin” in an 1861 book in which he described his social class as a physical and mental elite, identifiable by its noble “physiognomy” and “aptitude for learning,” which he insisted were “congenital and hereditary.” Holmes believed eugenic principles could be used to address the nation’s social problems. In an 1875 article in The Atlantic Monthly, he gave Galton an early embrace, and argued that his ideas could help to explain the roots of criminal behavior. “If genius and talent are inherited, as Mr. Galton has so conclusively shown,” Holmes wrote, “why should not deep-rooted moral defects…show themselves…in the descendants of moral monsters?” As eugenics grew in popularity, it took hold at the highest levels of Harvard. A. Lawrence Lowell, who served as president from 1909 to 1933, was an active supporter. Lowell, who worked to impose a quota on Jewish students and to keep black students from living in the Yard, was particularly concerned about immigration—and he joined the eugenicists in calling for sharp limits. “The need for homogeneity in a democracy,” he insisted, justified laws “resisting the influx of great numbers of a greatly different race.”

More here.

Why do our cell’s power plants have their own DNA?

Laurel Hamers in Science:

It’s one of the big mysteries of cell biology. Why do mitochondria—the oval-shaped structures that power our cells—have their own DNA, and why have they kept it when the cell itself has plenty of its own genetic material? A new study may have found an answer. Scientists think that mitochondria were once independent single-celled organisms until, more than a billion years ago, they were swallowed by larger cells. Instead of being digested, they settled down and developed a mutually beneficial relationship with their hosts that eventually enabled the rise of more complex life, like today’s plants and animals. Over the years, the mitochondrial genome has shrunk. The nucleus now harbors the vast majority of the cell’s genetic material—even genes that help the mitochondria function. In humans, for instance, the mitochondrial genome contains just 37 genes, versus the nucleus’s 20,000-plus. Over time, most mitochondrial genes have jumped into the nucleus. But if those genes are mobile, why have mitochondria retained any genes at all, especially considering that mutations in some of those genes can cause rare but crippling diseases that gradually destroy patients’ brains, livers, hearts, and other key organs? Scientists have tossed around some ideas, but there haven't been hard data to pick one over another.

…Mitochondria make energy through a series of chemical reactions that pass electrons along a membrane. Key to this process is a series of protein complexes, large protein globs that embed in the internal membrane of the mitochondria. All of the mitochondria’s remaining genes help produce energy in some way. But the team found that a gene was more likely to stick around if it created a protein that was central to one of these complexes. Genes responsible for more peripheral energy-producing functions, meanwhile, were more likely to be outsourced to the nucleus, the group reports today in Cell Systems. “Keeping those genes locally in the mitochondria gives the cell a way to individually control mitochondria,” Johnston says, because pivotal proteins are created in the mitochondria themselves. That local control means the cell can more quickly and efficiently regulate energy production moment-to-moment in individual mitochondria, instead of having to make sweeping changes to the hundreds or thousands of mitochondria it contains. For instance, an out-of-whack mitochondrion can be fixed individually rather than triggering a blanket, cell-wide response that might then throw something else off balance.

More here.

Thursday, March 24, 2016

In Newly Created Life-Form, a Major Mystery

Emily Singer in Quanta:

Peel away the layers of a house — the plastered walls, the slate roof, the hardwood floors — and you’re left with a frame, the skeletal form that makes up the core of any structure. Can we do the same with life? Can scientists pare down the layers of complexity to reveal the essence of life, the foundation on which biology is built?

That’s what Craig Venter and his collaborators have attempted to do in a new study published today in the journal Science. Venter’s team painstakingly whittled down the genome of Mycoplasma mycoides, a bacterium that lives in cattle, to reveal a bare-bones set of genetic instructions capable of making life. The result is a tiny organism named syn3.0 that contains just 473 genes. (By comparison, E. coli has about 4,000 to 5,000 genes, and humans have roughly 20,000.)

Yet within those 473 genes lies a gaping hole. Scientists have little idea what roughly a third of them do. Rather than illuminating the essential components of life, syn3.0 has revealed how much we have left to learn about the very basics of biology.

“To me, the most interesting thing is what it tells us about what we don’t know,” said Jack Szostak, a biochemist at Harvard University who was not involved in the study. “So many genes of unknown function seem to be essential.”

More here.

Can Ideas Withstand Shifts in Language?

Elisa Gabbert in Guernica:

There’s an idea in linguistics that until a culture creates a name for a color, they don’t really see it as a distinct category. It builds from the anthropological discovery that languages tend to develop color terms in the same order: first, for black and white (or roughly, light and dark), then for red, then for either green or yellow and then both, then blue (and so on). They don’t invent a word for blue, the thinking goes, much less for mauve or taupe, until they need it. Color terms proliferate in a world of dyes and spectrometry.

This has led some linguists and scholars to suggest that Homer’s “wine-dark sea” was not just a weird poeticism but evidence of the ancient Greeks’ entirely different perception of color: “As late as the fourth century BC, Plato named the four primary colors as white, black, red, and bright” (via Lapham’s Quarterly). It’s possible that our ancestors did not think of the ocean as blue; it certainly doesn’t look as blue from a boat as it does from a plane.

It’s a chicken-or-egg conundrum though: How can the name come after the concept if you need the name to understand the concept? This problem of circularity always made me resistant to the Sapir-Whorf hypothesis in its strong version, which states that our thoughts are bound by the restraints of our language. The weak version, that language merely affects our thoughts, seems trivially true. What doesn’t affect our thoughts?

More here.

Book review of A Numerate Life by John Allen Paulos

Dilip D'Souza in LiveMint:

You might pick the book up anticipating an autobiography, but A Numerate Life is not one in the sense you probably understand that word. Not least because Paulos nurses a “scepticism about the biographical enterprise” and that’s partly why this book’s “progression will be episodic and non-linear”. But through it all, he “hopes to show that the points of correspondence between mathematics and biography are … quite profound”.

And it’s in that spirit that Paulos takes us on a vivid, thought-provoking, free-spirited tour of his life, that mathematical lens firmly in place. Over here it’s his love interest, her shattered Coke bottle taking him, yes, two-thirds of the way. There it’s someone he knew who was “completely conventional, totally banal and utterly unimaginative”—except that Paulos finds out that he is given to “gluing five-dollar bills to the sidewalk (and) giggling at people trying to scrape them off”. Paulos uses this intriguing person as a springboard to show—mathematically, of course—how most of us are best characterized as strange, not normal. In doing so, he raises the question of what, indeed, is “normal”.

More here.

current debates about aa and science

Meghan O'Gieblyn at The Point:

Alcoholics Anonymous is notoriously difficult to evaluate scientifically. Several observational studies have been quite favorable to the program, finding, for instance, that the longer people attend twelve-step meetings, the more likely they are to achieve long-term sobriety, or that engagement in meetings, as opposed to mere attendance, can be correlated with sobriety. But for many, such studies are innately compromised by the fact that their members self-select. In The Sober Truth, Lance Dodes dismisses the observational studies wholesale. The kinds of people who go to AA, and moreover the ones who stick around, are those who find it useful. What about everyone else? To really understand the effectiveness of AA, Dodes suggests, we must consider everyone who walks into the rooms, including those reluctant attendees who sulk into the back rows of speaker meetings, nod off during the Serenity Prayer and never return. AA’s literature claims that those who fail to fully participate in the twelve steps tend to relapse, but for Dodes such warnings are little more than community propaganda, a way of blaming the participant when the program fails them. “Imagine if similar claims were made in defense of an ineffective antibiotic,” he writes.

As the comparison makes clear, Dodes conceives of AA as a “treatment” for alcoholism, a term that assumes patient passivity and is at odds with how members often describe the program: as a spiritual discipline that requires its participants to engage in a series of actions and rituals. Yet it is the discussion of attendance versus participation that lays the groundwork for Dodes’s conclusion about AA’s inefficacy. Citing data from the NIAAA that claims up to 31 percent of people who go to AA stick around for a year or more, Dodes then modifies those numbers to reflect attendance rather than involvement.

more here.

what is a robot, really?

Adrienne Lafrance at The Atlantic:

The year is 2016. Robots have infiltrated the human world. We built them, one by one, and now they are all around us. Soon there will be many more of them, working alone and in swarms. One is no larger than a single grain of rice, while another is larger than a prairie barn. These machines can be angular, flat, tubby, spindly, bulbous, and gangly. Not all of them have faces. Not all of them have bodies.

And yet they can do things once thought impossible for machines. They vacuum carpets, zip up winter coats, paint cars, organize warehouses, mix drinks, play beer pong, waltz across a school gymnasium, limp like wounded animals, write and publish stories, replicate abstract expressionist art, clean up nuclear waste, even dream.

Except, wait. Are these all really robots? What is a robot, anyway?

This has become an increasingly difficult question to answer. Yet it’s a crucial one. Ubiquitous computing and automation are occurring in tandem. Self-operating machines are permeating every dimension of society, so that humans find themselves interacting more frequently with robots than ever before—often without even realizing it. The human-machine relationship is rapidly evolving as a result. Humanity, and what it means to be a human, will be defined in part by the machines people design.

more here.

Israel: The Broken Silence

David Shulman at the NYRB:

Israeli human rights activists and what is left of the Israeli peace groups, including joint Israeli-Palestinian peace organizations, are under attack. In a sense, this is nothing very new; organizations such as B’Tselem, the most prominent and effective in the area of human rights, and Breaking the Silence, which specializes in soldiers’ firsthand testimony about what they have seen and done in the occupied territories and in Gaza, have always been anathema to the Israeli right, which regards them as treasonous. But open attacks on the Israeli left have now assumed a far more sinister and ruthless character; some of them are being played out in the interrogation rooms of Israeli prisons. Clearly, there is an ongoing coordinated campaign involving the government, members of the Knesset, the police, various semiautonomous right-wing groups, and the public media. Politically driven harassment, including violent and illegal arrest, interrogation, denial of legal support, virulent incitement, smear campaigns, even death threats issued by proxy—all this has become part of the repertoire of the far right, which dominates the present government and sets the tone for its policies.

There is now a palpable sense of danger, and also an accelerating decline into a situation of incipient everyday state terror. Palestinians have lived with the reality of state terror for decades—it is the very stuff of the occupation—but it has now seeped into the texture of life inside the Green Line, as many on the left have warned that it would. Israelis with a memory going back to the 1960s sometimes liken the current campaign to the violent actions of the extreme right in Greece before the colonels took power, as famously depicted in the still-canonical film Z.

more here.