The Two Faces of Narcissism: Admiration Seeking and Rivalry

Matthew Hutson in Scientific American:

In the past two years the study of narcissism has gotten a face-lift. The trait is now considered to have two distinct dimensions: admiration seeking and rivalry. Subsequent studies, including a recent look at actors, revealed a more nuanced picture of personality than did past work. The actors, for instance, want admiration more than most people but tend to be less competitive than the average Joe—they may crave the spotlight, but they will not necessarily push others out of the way to get it.

The new understanding of narcissism started with a 2013 paper in the Journal of Personality and Social Psychology that identified narcissism's two dimensions. “Previous theories and measures of narcissism dealt with this trait as a unitary construct, mixing up agentic aspects—assertiveness, dominance, charm—with antagonistic aspects—aggressiveness and devaluation of others,” says Mitja Back of the University of Münster in Germany, the study's primary author. Lumping both aspects together made narcissistic behavior confounding. Studying hundreds of healthy subjects, Back's team found that traits related to narcissism clustered into two categories, with both facets of narcissism serving to maintain a positive self-image. Self-promotion draws praise, whereas self-defense demeans others to fend off criticism. Admiration seeking and rivalry each have different effects on body language, relationship health and personality.

More here.

Thursday, May 14, 2015

Affairs of State


William Rosen in Lapham's Quarterly (image Gabrielle D’Estrées and One of Her Sisters, c. 1594. Louvre Museum, Paris, France.):

On Sunday, January 18, 532, the sun rose fairly late over Constantinople—had one been available, a modern watch would have read 7:29 a.m.—but no one in the world’s most populous city saw it through the smoke. The city, which had been the center of the Roman Empire ever since Constantine had founded it as his eponymous capital two centuries before, was on fire. Five days earlier, what had begun in the city’s chariot-racing stadium as a protest against two planned hangings had turned into the biggest civil insurrection in the empire’s history. Tens of thousands of rioters had vandalized and burned the imperial senate, the city’s most important church, the enormous public Baths of Zeuxippus, and dozens of other buildings. By Sunday, the riot had escalated into a rebellion. The insurgents demanded the abdication of the current emperor, whose own bodyguards had him and his empress besieged in their own palace.

The trapped ruler had traveled a long and indirect route to the pinnacle of the empire founded by Augustus six centuries before. Christened Flavius Petrus Sabbatius in the last years of the fifth century, he had come to Constantinople as the protégé of his uncle, a successful officer in the palace guard. There, he took advantage of the capital’s substantial educational opportunities and his own even more substantial intelligence to place his uncle on the imperial throne, and to succeed him, under the name he gave himself: Justinian.

He was already first in line for the crown when he met his future empress, Theodora, whose path to the palace was even more improbable than Justinian’s own. His father was a Balkan peasant; hers was an animal trainer for Constantinople’s circuses. He had spent his early years as a student of theological dogma and an undistinguished soldier. She had been a high-priced prostitute and Constantinople’s most famous erotic actress: a petite, wavy-haired beauty with enormous dark eyes and a stage act that was three parts Mae West, two parts Jenna Jameson.

More here.

Reviving the Female Canon


Susan Price in The Atlantic:

In his first work, published in 1747, Immanuel Kant cites the ideas of another philosopher: a scholar of Newton, religion, science, and mathematics. The philosopher, whose work had been translated into several languages, is Émilie Du Châtelet.

Yet despite her powerhouse accomplishments—and the shout-out from no less a luminary than Kant—her work won’t be found in the 1,000-plus pages of the new edition of The Norton Introduction to Philosophy. In the anthology, which claims to trace 2,400 years of philosophy, the first female philosopher doesn’t appear until the section on writing from the mid-20th century. Nor does her work appear in any of the other leading anthologies used in university classrooms, scholars say.

Also absent are these 17th-century English thinkers: Margaret Cavendish, a prolific writer and natural philosopher; Anne Conway, who discusses the philosophy of Descartes, Hobbes, and Spinoza in The Principles of the Most Ancient and Modern Philosophy (which is influenced by the Kabbalah); and “Lady” Damaris Masham—the daughter of a Cambridge Platonist and a close friend of John Locke who published several works and debated ideas in letters she exchanged with the German mathematician and philosopher G.W. Leibniz.

Despite the spread of feminism and multiculturalism, and their impact on fields from literature to anthropology, it is possible to major in philosophy without hearing anything about the historical contributions of women philosophers. The canon remains dominated by white males—the discipline that some say still hews to the myth that genius is tied to gender.

More here.

Why Labour Shouldn’t Rush Back to the Right


Jonathan Hopkin in The Conversation (image EPA/Richard Lewis):

Ed Miliband’s failure to win the 2015 election, or even to increase Labour’s share of seats, has been seized upon by the Blairite wing of the party to push its own centrist agenda. Peter Mandelson, one of the architects of New Labour, accused Miliband of making a “terrible mistake” in abandoning Blair’s focus on aspirational John Lewis voters.

Chuka Umunna, who is running for the party leadership, claimed that Labour was punished by voters for running a budget deficit before the financial crisis. Tony Blair weighed in too, claiming that a more “left-wing” or “Scottish” approach would not help win back the voters lost north of the border.

There is no doubt that Labour’s failure to win over enough voters in Middle England marginal constituencies cost it the election, and it is equally true that Tony Blair’s New Labour project was successful in this regard in the 1990s and 2000s. But a similar centrist strategy would not work again for Labour. Another look at the reasons for Labour’s defeat shows why.

There is strong evidence that Labour is still carrying the burden of being in office when the financial crisis struck, fatally damaging its hard won reputation for economic competence. IPSOS Mori data shows that Labour held a substantial advantage over the Conservatives on economic policy throughout its period of government until 2008.

This advantage was lost when the financial crisis began, and has not yet been recovered: coming into the election, the Conservatives led by 41% to 23% when voters were asked which party had the best policies for the economy. Labour took the blame for the crisis, just in the same way the Conservatives lost their reputation for economic competence on Black Wednesday in 1992, when the pound was ejected from the European Exchange Rate Mechanism. In key constituencies, the perceived risks of a Labour government to economic stability undoubtedly shored up the Conservative vote.

More here.

Fertility fog

Amy Klein in Aeon:

For a woman over 42, there’s only a 3.9 per cent chance that a live birth will result from an IVF cycle using her own, fresh eggs, according to the American Society of Reproductive Medicine (ASRM). A woman over 44 has just a 1.8 per cent chance of a live birth under the same scenario, according to the US National Center for Chronic Disease Prevention and Health Promotion. Women using fresh donor eggs have about a 56.6 per cent chance of success per round for all ages. Indeed, according to research from the Fertility Authority in New York, 51 per cent of women aged between 35 and 40 wait a year or more before consulting a specialist, in hopes of conceiving naturally first. ‘It’s ironic, considering that the wait of two years will coincide with diminished fertility,’ the group says. Stories of celebrities and other older women having babies have led to misunderstanding precisely because they fly in the face of long-held beliefs: for many years, all women heard was that fertility takes a dive at 30-35. But when you see plenty of people having no trouble having babies until 40, you think it’s just a scare tactic. That’s what happened to me, anyway. Raised in a traditional Jewish, family-oriented community that emphasises early marriage and plenty of children, I was constantly warned about waiting too long to get married and have kids. But when I left that community, I saw so many women who had kids at 37, 38, that I thought it was all bubbe meises, old wives’ tales.

It’s not. There is a declining rate of fertility strongly tied to age – but the exact numbers have recently come up for debate. In a piece for The Atlantic in 2013 headlined ‘How Long Can you Wait to Have a Baby’, the psychologist Jean Twenge showed that much of the research cited by articles and studies (such as how one in three women aged 35-39 will not be pregnant after a year of trying) was based on ancient figures, such as French birth records from 1670 to 1830. She also cites a 2004 Obstetrics and Gynecology study examining chances of pregnancy among 782 European women. With sex at least twice a week, the study found, 82 per cent of 35-to-39-year-old women conceived within a year, compared with 86 per cent of 27-to-34-year-olds.

More here.

How long should a woman wait to freeze her eggs?

Emily DeMarco in Science:

In 2012, the American Society for Reproductive Medicine announced that it was no longer “experimental” for a woman to freeze her eggs simply because she wanted to wait to have a child. Since then, demand for the procedure has skyrocketed, even though its costs remain high. Now, scientists say they have figured out—taking economic and biological considerations into account—the best age for women to freeze their eggs if they want to get pregnant as late in life as possible. Two main factors determine when it’s best to freeze human eggs: how viable those eggs will be when thawed and how much it’s going to cost. The longer a woman waits to freeze her eggs, the less likely they are to result in a live birth. Yet, older women get more bang for their buck by freezing their eggs, because the procedure benefits them most; young women potentially waste money freezing their eggs because they’ll still likely be quite fertile a few years down the line.

…Fertility rates, however, are most strongly influenced by the age of a woman’s eggs. As a result, the nearly 52% probability of a successful pregnancy using those eggs frozen in time at age 37 is lower than what many women and their doctors consider acceptable. According to the study, most women would have to freeze their eggs by age 34 to have at least a 70% chance of live birth. Considering the researchers found that egg freezing provided the most improvement in live birth rates over not freezing after age 30, “we feel the sweet spot for those electing to freeze is age 31 to 33,” says co-author Tolga Mesen, an obstetrician-gynecologist at the Fertility Clinic at the University of North Carolina, Chapel Hill.

More here.

The Trials of Hannah Arendt


Corey Robin in The Nation (Photo via AP):

Hannah Arendt’s five articles on the 1961 trial of Adolf Eichmann by the state of Israel appeared in The New Yorker in February and March 1963. They were published as Eichmann in Jerusalem: A Report on the Banality of Evil later that year. The book immediately set off a controversy that a half-century later shows no signs of abating. Just this past fall, the intellectual historian Richard Wolin (a colleague of mine at the CUNY Graduate Center) and the Yale political theorist Seyla Benhabib fought bitterly over Eichmann in the pages of The New York Times and the Jewish Review of Books. The book has become the event, eclipsing the trial itself.

The Eichmann fires are always smoldering, but what reignited them last fall was the appearance in English of Bettina Stangneth’s Eichmann Before Jerusalem, first published in Germany in 2011. Eichmann Before Jerusalem aims to reveal a depth of anti-Semitism in Eichmann that Arendt never quite grasped. Stangneth bases her argument on the so-called Sassen transcripts, a voluminous record of conversations between Eichmann and a group of unreconstructed Nazis in Argentina in the 1950s (only a portion of the transcripts were available to Arendt, who read and discussed them in Eichmann). Yet Stangneth’s is merely the latest in a series of books—including Deborah Lipstadt’s The Eichmann Trial, published in 2011, and David Cesarani’s Becoming Eichmann: Rethinking the Life, Crimes, and Trial of a “Desk Murderer,” which appeared in 2004—arguing that Eichmann was more of an anti-Semite than Arendt had realized.

There’s a history to the conflict over Eichmann in Jerusalem, and like all such histories, the changes in how we read and argue about the book tell us as much about ourselves, and our shifting preoccupations and politics, as they do about Eichmann or Arendt. What has remained constant, however, is the wrath and the rage that Eichmann has aroused. Other books are read, reviled, cast off, passed on. Eichmann is different. Its errors and flaws, real and imagined, have not consigned it to the dustbin of history; they are perennially retrieved and held up as evidence of the book’s viciousness and its author’s vice. An “evil book,” the Anti-Defamation League said upon its publication, and so it remains. Friends and enemies, defenders and detractors—all have compared Arendt and her book to a criminal in the dock, her critics to prosecutors set on conviction.

Like so many Jewish texts throughout the ages, Eichmann in Jerusalem is an invitation to an auto-da-fé. Only in this case, almost all of the inquisitors are Jews.

More here.

Wednesday, May 13, 2015

The Data That Threatened to Break Physics


Ransom Stephens in Nautilus (Alberto Pizzoli/AFP/Getty Images):

Antonio Ereditato insists that our interview be carried out through Skype with both cameras on. Just the other side of middle age, his salt-and-pepper hair frames wide open eyes and a chiseled chin. He smiles easily and his gaze captures your attention like a spotlight. An Italian accent adds extra vowels to the end of his words.

We talk for 15 minutes before he agrees to an on-the-record interview. He tells me he has no desire to engage journalists who might subvert his words into a sensational, insincere story. The reason he agreed to Skype with me is because I am not a journalist, but a physicist and writer who spent 13 years in the trenches of experimental particle physics. And he has no tolerance for entering another debate about behavior rather than science. But finally, he says, “Okay. I’ve looked in your eyes. I trust you. Maybe that is my problem. Maybe I trust too easily, but I trust you.” He laughs and leans back in his chair with his arms out and open.

Ereditato is the former leader of the 160 physicists from 13 countries that compose the OPERA collaboration, whose goal is to study neutrino physics. It was first proposed in 2000, and Ereditato led it from 2008 to 2012. Then in late winter of 2011, the impossible seemed to happen. “The guy who is looking at the data calls me,” Ereditato tells me from my computer screen. “He says, ‘I see something strange.’ ” What he saw was evidence that neutrinos traveled through 454 miles of Earth’s crust, from Switzerland to Italy—which they are supposed to do—at such a high speed that they arrived 60.7 nanoseconds faster than light could travel that distance in outer space—which should have been impossible.

Over the last century, Einstein’s observation that no massive object can travel faster than the speed of light in a vacuum, enshrined in his theory of special relativity, has become a keystone of how we understand the universe. If the OPERA measurement was correct, it would mark the first-ever violation of that theory: An atom bomb in the heart of our understanding of the universe.

More here.

Making It Explicit in Israel


Anat Biletzki in the NYT's The Stone:

Twenty years ago, the philosopher Robert Brandom, in his momentous book, “Making It Explicit,” presented us with a new way of looking at language and meaning. Using the work of a number of philosophers — from Kant and Hegel, to 20th-century thinkers like Ludwig Wittgenstein, Gottlob Frege, W.V. Quine, Michael Dummett and many others — he showed us how to move the fulcrum of our attention from representation to inference, from the molecular to the holistic, from the individual to the social, and from the factual-descriptive to the normative. In short: Brandom explained that it is through social, communal norms that we give meaning to our words.

According to Brandom, we, as rational beings looking for reasons, make assertions that commit us to the connections (through inference) between the things we say, yet this is actually part of a game of making explicit what is already there, in our social, moral and political norms.

In Israel, the unambiguous move to explicitness began in July 2014 — more exactly between July 8 and August 27 of last year — when Israel engaged in the military operation called Tsuk Eitan. That means “Firm Cliff,” not “Protective Edge,” as translated by the army spokesperson, a phrase expressing implicit defensiveness. The precise goals of the operation were never clearly articulated, moving from the reported objective of stopping Hamas rockets from falling on Israeli territory to that of destroying the tunnels to weakening Hamas to returning quiet and security for Israeli communities in the South to achieving a responsible, militarily weakened sovereign in Gaza.

More here.

Foundations for Moral Relativism


Antti Kauppinen reviews J. David Velleman, Foundations for Moral Relativism, in Notre Dame Philosophical Reviews:

It comes as no surprise that David Velleman's brief but dense new book is original, provocative, erudite, and seductive. Drawing on a characteristically broad range of non-philosophical sources — such as game studies, anthropology, and ethnomethodology — he presents novel arguments in defense of moral relativism. In this review, I will examine some of his central arguments.

What is moral relativism? It is not the view that different things are morally right or wrong in different circumstances. Non-relativists agree that whether it is wrong to let a child play alone in the park or dance at a funeral depends on whether there is a risk of significant harm or whether the behavior is disrespectful in context. What they insist on is that context-dependent truths about right and wrong can be derived from the conjunction of non-moral facts of the situation and basic moral principles that are universal in the sense that they apply to everyone regardless of their moral or other beliefs or community membership. This is what relativism denies. Positively, relativism says that there are a variety of communities whose norms are genuinely authoritative for their members.

One way to put the relativist claim is semantic. Velleman says that “it makes no sense to ask whether an action or practice is wrong simpliciter” (45), any more than it makes sense to ask whether someone is tall simpliciter. Just like someone can only be tall-for-X, something can only be wrong-for-members-of-X (or perhaps wrong-by-the-standards-of-X). In each case, the variable may be left implicit to be supplied by the context. Famously, such views have difficulty accommodating the intuition that it's possible for people in different communities (or just people subscribing to different moral standards) to disagree with each other without linguistic confusion. The Catholic from Peru who says that abortion is always wrong and the atheist from Sweden who says it is not always wrong appear to hold conflicting views on the morality of abortion, rather than just making claims about what their own standards or communal norms prohibit or allow.

I suspect Velleman would say that such disagreement is only genuine insofar as the parties are members of the same community (in spite of their differences). Otherwise, they will be speaking past each other after all — their only disagreement can be in attitude, in what to do. (Velleman has no patience for recently trendy forms of relativism, according to which it is possible for people to disagree faultlessly.) Were they to confusedly maintain that abortion is wrong or not wrong simpliciter, they would be mistaken. Why? According to Velleman, the main argument against universalism is simple: there are communities with different moral norms, and “no one has ever succeeded in showing any one set of norms to be universally valid” (45).

More here.

What Caused Capitalism? Assessing the Roles of the West and the Rest

Jeremy Adelman in Foreign Affairs:

Once upon a time, smart people thought the world was flat. As globalization took off, economists pointed to spreading market forces that allowed consumers to buy similar things for the same prices around the world. Others invoked the expansion of liberalism and democracy after the Cold War. For a while, it seemed as if the West’s political and economic ways really had won out.

But the euphoric days of flat talk now seem like a bygone era, replaced by gloom and anxiety. The economic shock of 2008, the United States’ political paralysis, Europe’s financial quagmires, the dashed dreams of the Arab Spring, and the specter of competition from illiberal capitalist countries such as China have doused enthusiasm about the West’s destiny. Once seen as a model for “the rest,” the West is now in question. Even the erstwhile booster Francis Fukuyama has seen the dark, warning in his recent two-volume history of political order that the future may not lie with the places that brought the world liberalism and democracy in the past. Recent bestsellers, such as Daron Acemoglu and James Robinson’s Why Nations Fail and Thomas Piketty’s Capital in the Twenty-first Century, capture the pessimistic Zeitgeist. So does a map produced in 2012 by the McKinsey Global Institute, which plots the movement of the world’s economic center of gravity out of China in the year 1, barely reaching Greenland by 1950 (the closest it ever got to New York), and now veering back to where it began.

It was only a matter of time before this Sturm und Drang affected the genteel world of historians. Since the future seems up for grabs, so is the past. Chances are, if a historian’s narrative of the European miracle and the rise of capitalism is upbeat, the prognosis for the West will be good, whereas if the tale is not so triumphal, the forecast will be more ominous. A recent spate of books about the history of global capitalism gives readers the spectrum.

More here.

The next step in saving the planet: E O Wilson and Sean Carroll in conversation

From Mosaic:

Sean: At what point did you know enough as a scientist, or had travelled enough, that you perceived a threat to nature?

Ed: I knew it when I started going into the tropics in the early 1950s, but it’s the sort of thing you see and you don’t grasp at first. I saw ruined environments in Mexico and parts of the South Pacific, and I used to say, “Oh well, they messed that one up. That makes it a lot harder to go to the rainforest; I have to go way over the mountain range.”

We only began to put the big picture together in the 1970s and 1980s, which allowed us to think in terms of what could be preserved and how we might be able to do it.

Sean: You’ve looked at this picture globally – you’re far more experienced than almost any biologist in this – and looked at how large a task this is. Let me make sure I have an understanding of where we would start. Would we start with habitat protection? Is the first job, before we lose anything else, to protect the ecosystem?

Ed: Absolutely.

Sean: And that’s something people can do.

Ed: Absolutely. That’s what the best global conservation organisations and our government (and other environmentally inclined governments, such as Sweden and the Netherlands) are doing: protecting the remaining wild environment. This is the equivalent of getting a patient to the emergency room – keep them alive and then figure out how to save them.

The global conservation organisations are doing everything they can on modest budgets. They essentially promote setting aside reserves and parks around the world. Recently, in the book Half Earth (due out in March 2016), I’ve made the case for global reserves that collectively cover half the surface of Earth’s land and sea.

More here.

Listening in on the nuclear underground

From the Bulletin of the Atomic Scientists:

A global network of seismic and infrasound monitoring stations listens constantly for underground clues that a nuclear test has taken place. Set up by the Comprehensive Test Ban Treaty Organization (CTBTO) Preparatory Commission, the stations will be part of the verification system for a comprehensive test ban treaty, should it come into force. The United States signed the treaty in 1996, but in 1999, the US Congress declined to ratify it. Since then, efforts to bring the treaty into force have stalled. Just the same, most countries have observed it, and the monitoring system is widely credited with being able to identify any nuclear tests that are conducted. This video, produced by the CTBTO, uses a monitoring station in Bischofsreut, a tranquil corner of Germany's Bavarian Forest, to explain how the global nuclear detection system works.

Recognition: Build a reputation

Chris Woolston in Nature:

Less than a decade after receiving her undergraduate degree in biology, Holly Bik has transformed herself. When she started her PhD, she was an aspiring marine biologist with a deep interest in nematode worms. Today, she is a highly regarded interdisciplinary computational and evolutionary biologist who travels the world to give talks on topics that range from use of social media to what she dubs 'ecophylometamicrobiomics' — the identification of eukaryotic microbes in the environment through sequencing. Now at the University of Birmingham, UK, she has led the development of the data-visualization platform Phinch and is actively involved in three working groups tackling issues as diverse as the evolution of indoor microbial communities and the biodiversity of the deep sea.

It is all a big leap from worms. How did she become such a sought-after figure in the science community? The key to property is said to be location, location, location; in science, it's all about reputation, reputation, reputation. “I'm trying to cultivate a reputation as an interdisciplinary researcher,” says Bik. “Marine biology, computer programming, genomics — I want people to think of me as a potential collaborator.” If science were truly a double-blind enterprise, generic researchers X, Y and Z would compete for citations, grants, invited talks and promotions solely on the basis of their accomplishments and aptitude. In the real world, scientists have names, and those names come with baggage, both positive and negative. In an increasingly competitive scientific environment, a reputation may matter more than ever, says Philip Bourne, associate director for data science at the US National Institutes of Health (NIH) in Bethesda, Maryland. “The degree of separation between any two scientists is relatively small,” Bourne says. “If you're colossally brilliant, you can be a jerk and still have a good reputation. But if you're a mere mortal, the way you treat science and the people around you will come back on you.”

More here.

Wednesday Poem

My Grandparents' Generation

They are taking so many things with them:
their sewing machines and fine china,

their ability to fold a newspaper
with one hand and swat a fly.

They are taking their rotary telephones,
and fat televisions, and knitting needles,

their cast iron frying pans, and Tupperware.
They are packing away the picnics

and perambulators, the wagons
and church socials. They are wrapped in

lipstick and big band music, dressed
in recipes. Buried with them: bathtubs

with feet, front porches, dogs without leashes.
These are the people who raised me

and now I am left behind in
a world without paper letters,

a place where the phone
has grown as eager as a weed.

I am going to miss their attics,
their ordinary coffee, their chicken

fried in lard. I would give anything
to be ten again, up late with them

in that cottage by the river, buying
Marvin Gardens and passing go,

collecting two hundred dollars.

by Faith Shearin
from Telling the Bees
Stephen F. Austin State University Press, 2015

Tuesday, May 12, 2015

Making Shit Up

Justin E. H. Smith in his own blog:

Nabokov said its humor did not age well, and unlike Moby-Dick, which is occasionally dismissed as a school-boy's adventure story but never as hokey or stale, The Ingenious Gentleman Don Quixote de la Mancha seems to suffer under the weight of its most representative scenes. The association of the whole with these mere parts is either too vivid, or it is not vivid at all, as in the case of the subnovel of Anselmo and Lothario, which everyone today knows, without knowing where it is from. Most of these scenes are played out in Part I, by the end of which the presumed hero has survived several battles against hallucinated enemies, drawn his squire hesitantly but hopefully into all of them, and mingled with several different minor characters, many of whose own stories, and not just Lothario's, amount to novels within the novel. He has been tricked into a cage by a sympathetic pair, a canon and a priest, and taken back to his home, to his housekeeper and his niece, in the hope that he might be cured of his madness.

Part I was published first in Madrid in 1605, and over the next ten years would be published in Brussels (1607), Milan (1610), and, in the first of many English translations, in London in 1612. Part II would be published ten years after Part One, also in Madrid, in 1615. Although Don Quixote is so often reduced to the battle with the windmills, which has been concluded within the first few chapters of Part One (leading us to suspect that its iconic character has at least something to do with the fact that many readers get no further), it is Part II, and what happens or is imagined to have happened between 1610 and 1615, that is the true clavis to understanding the novel in its entirety, and in all its philosophical, subversive, deceitful greatness.

More here.

Is the promotion of violence inherent to any religion?

David Nirenberg in The Nation:

Is religion good or bad? This sound bite of a question dominates much of what passes for public discussion of religion in the United States. When the soi-disant New Atheists took the bestseller lists by storm in the first decade of the new millennium with titles like The End of Faith (2004), The God Delusion (2006), Breaking the Spell (2006), and God Is Not Great (2007), it was because they focused almost exclusively on the capacity of religion to generate violence. This wasn’t surprising, considering that since 9/11 we have lived in a world newly conscious of the geopolitical power of piety. Defenders of faith have of necessity adopted the same focus, albeit to opposite ends. “The idea that religion has a tendency to promote violence is part of the conventional wisdom of Western societies,” writes William Cavanaugh in his revealingly titled The Myth of Religious Violence (2009). Karen Armstrong sharpens the point in the opening paragraph of Fields of Blood, her new inquiry into the relationship between religion and violence: “Modern society has made a scapegoat of faith.”

If by “modern society” Armstrong means the New Atheists and their handful of vocal followers, then maybe she is right. But her claim should seem either polemical or naïve to anyone living not only in the United States, where a large majority of citizens believe in heaven and hell, but also in countries governed by parties with names like the Christian Democratic Union (Germany) or the Pakistan Muslim League. A visitor from outer space (or a reader of surveys) might be forgiven for thinking—as he, she, or it tours the burgeoning churches of the former Soviet bloc; skims the blogs, newspapers, and TV channels of the Islamic world; or listens on a universal translator to the speeches of politicians across Europe and the Americas—that modern society is, to the contrary, a haven for the faithful. But even assuming that religion is increasingly powerful rather than embattled, the polarizing question at the center of Cavanaugh’s and Armstrong’s broadsides remains important: Is the promotion of violence inherent to any religion, or is violence committed in the name of religion a mutation or betrayal of an inherently benevolent faith?

More here.