Jane Alexander. Butcher Boys. 1985-86.
Sculpture in plaster, bone, horn, oil paint, and wood.
Here is what Dr. Lalami had to say about them:
The finalists for this year’s 3QD prize write in very different genres, but they were
all very impressive, which made the task of choosing just three difficult indeed.
Here are my selections:
Namit Arora’s powerful review of Omprakash Valmiki’s Joothan: A Dalit’s Life for 3
Quarks Daily places this 1997 memoir in a personal, cultural, and literary context.
Arora gives a very moving portrayal of a kind of life I knew little about, an honest
reckoning of the privileges of his own upbringing, and a thoughtful analysis both of
Valmiki’s work in Hindi and its translation into English.
All too often, the subject of race is felt to be the sole purview of people of color—as if
white people were completely unaffected by racial history or reality. Edan Lepucki’s
candid piece for The Millions, in which she discusses her exposure to questions of
race and slavery through various novels, shows us how literature, which requires us
to have imaginative empathy, can also help us develop actual empathy.
Elliott Colla’s analysis of Egyptian revolutionary slogans for Jadaliyya is both
sensitive and original. In discussing how poetry is created, performed, and
remembered—not just right now in Tahrir Square, but also during earlier historical
periods—he reminds us that literature and life are not distinct or divergent spheres,
but indivisible aspects of the human experience.
Congratulations from 3QD to the winners! I will send the prize money later today or tomorrow; remember, you must claim it within one month from today by sending me an email. And feel free to leave your acceptance speech as a comment here! Thanks to everyone who participated, and thanks also, of course, to Laila Lalami for doing the final judging.
The three prize logos at the top of this post were designed, respectively, by Carla Goller, me, and Sughra Raza. I hope the winners will display them with pride on their own blogs!
Details about the prize here.
Over at the Boston Review, there is a forum on the promise of applying the lessons of behavioral economics to challenges in development, with the lead piece by Rachel Glennerster and Michael Kremer and comments from: Diane Coyle; Eran Bendavid; Pranab Bardhan; José Gómez-Márquez; Chloe O’Gara; Jishnu Das, Shantayanan Devarajan, and Jeffrey S. Hammer; and Daniel N. Posner. From the lead piece:
According to a standard economic model, a fourteen-year-old girl in Kenya will go to school if doing so will enable her to earn more than she spends on her education. A family will buy dilute-chlorine solution, measure out capfuls to treat their water, and wait for the chlorine to disinfect it if the health benefits exceed the cost of the chlorine. Since a school uniform that lasts a year or two costs only six dollars, and a month’s supply of chlorine runs about $0.30, these costs should be fairly minor factors. Influenced in part by these arguments, many governments in the developing world and nongovernmental organizations (NGOs) concerned with development have maintained small charges for education and preventative health care.
However, in recent decades economists have increasingly come to recognize what most of us have long known: human beings don’t always make the best decisions.
A new type of economics, dubbed “behavioral economics,” seeks to understand deviations from the simple “rational agent” model that has dominated economics for most of its history—why people procrastinate, say, or why Americans don’t exercise or save enough.
In the developed world, these ideas are beginning to affect policy. For instance, the Pension Protection Act of 2006 encourages U.S. employers to establish automatic enrollment for retirement plans. Could such approaches help alleviate poverty in developing countries? If policies based on behavioral economics can help Americans save more, could they also help Indian children get vaccinated or Kenyan children get cleaner water?
Evidence from randomized evaluations in the developing world suggests they might.
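The contrast the lead authors draw can be made concrete with a toy model. Under the rational-agent rule, any purchase whose discounted benefits exceed its cost gets made; under quasi-hyperbolic ("beta-delta") discounting, a standard behavioral-economics device for modeling procrastination, the very same agent can turn the purchase down. The sketch below is a minimal illustration, not anything from the forum itself, and every number in it is hypothetical.

```python
# Hypothetical illustration of present bias (quasi-hyperbolic discounting).
# A rational agent (beta = 1) buys chlorine whose small upfront cost is
# outweighed by future health benefits; a present-biased agent (beta < 1)
# downweights ALL future payoffs and may decline the same purchase.

def discounted_value(cost_now, future_benefits, beta=1.0, delta=0.95):
    """Net present value under beta-delta discounting.

    cost_now        -- upfront cost paid today (e.g., $0.30 of chlorine)
    future_benefits -- list of benefits, one per future period
    beta            -- present-bias factor (1.0 = fully rational)
    delta           -- standard per-period discount factor
    """
    future = sum(beta * (delta ** (t + 1)) * b
                 for t, b in enumerate(future_benefits))
    return future - cost_now

benefits = [0.10] * 12  # invented monthly health benefit of treated water

print(discounted_value(0.30, benefits, beta=1.0))  # > 0: rational agent buys
print(discounted_value(0.30, benefits, beta=0.2))  # < 0: present-biased agent passes
```

With beta = 1 the chlorine is worth buying; shrinking beta, which downweights every future payoff relative to today, flips the decision even though nothing about costs or benefits has changed.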
Julia Felsenthal in Slate:
Slate tech columnist Farhad Manjoo argued recently that the fax machine lives on largely because of our attachment to the written signature. Manjoo's observation piqued the Explainer's curiosity: When did scribbling your name on a piece of paper become a means of authentication?
A long time ago. Signatures on written transactions have been customary in Jewish communities since about the second century and among Muslims since the Hegira (the migration of Muhammad and his followers to Medina) in 622. In Europe, the signature dates to the sixth century. But it didn't catch on widely there for another thousand-odd years, until the 16th and 17th centuries, when education and literacy were on the rise and more agreements were made in writing. In England, the 1677 Statute of Frauds—which stipulated that contracts must exist in writing and bear a signature—was pivotal. Signatures became a standard form of validating agreements—a practice that was also adopted in colonial America.
Between the sixth century, when signatures first appeared, and the 17th century, when signing became standard practice, Europeans used various customs to formalize contracts. Wax seals bearing an impressed or embossed figure were common, particularly among the French, who brought the tradition to England during the Norman invasion. (Seals also appear in the Bible, and to this day sealing, not signing, is standard practice in China, Japan, and Korea.) One popular way to create these impressions was to press a signet ring into beeswax. Signet rings themselves were also used as validation: A king might, for example, dispatch a herald bearing an oral message to a foreign power, and give him the royal signet ring so that the message's recipient would be confident of its origin.
Daisy Rockwell over at Bookslut:
A bored young woman walks from room to room in her beautiful house. She sprawls on her bed and leafs through a novel, then wanders to the living room and looks through the bookcases for a new book. Suddenly she hears an interesting sound. She rushes to the windows and peers through the slats in the dark shutters. She sees a performer with a monkey, then some men carrying a palanquin, then a foolish man waddling along with an umbrella. Excited, she moves from shutter to shutter to peer through as these characters cross back and forth through her line of vision. She fetches a lorgnette, so she can see them better. When all disappear from view, she walks back through the living room, still holding up the lorgnette and stands on the porch that encloses the house’s interior verandah. Her husband walks by, fetches a book, and walks back again. He doesn’t notice her. She focuses her lorgnette on him. As his figure recedes into a different part of the house, her hand, still holding the lorgnette, drops to her side, and the camera zooms abruptly away from her.
Satyajit Ray’s film Charulata (1964) tells the story of a lonely young housewife whose distracted husband fails to notice until too late that she has fallen in love with his younger brother. The film is based on Rabindranath Tagore’s Bengali novella Broken Nest (Nashtaneer), which, along with two other Tagore novellas, Two Sisters (Dui Bon) and The Orchard (Malancha), about complicated marriages, has just come out in an excellent new translation by Arunava Sinha as the collection Three Women.
John D. Boy over at The Immanent Frame:
The concept is not just all over The Immanent Frame. It has also appeared in the titles of about forty books, most in English and German, the majority of which were published within the past five years. Additionally, the concept features prominently in seventeen dissertations indexed by ProQuest, which largely covers dissertations completed at North American universities. More than half of these dissertations were deposited after 2007. And that is to say nothing of the dozens of articles in scholarly journals that are an important part of the discussion of the postsecular, or the approximately half-dozen academic conferences held on both sides of the Atlantic in the last three years. These numbers indicate that both established and emerging scholars are staking their work on the concept of the postsecular. Finally, illustrating a broader trend in intellectual debate, significant interventions in the discussion have also appeared online, especially at Eurozine, ResetDOC, and on this very blog.
For over a decade now, the concept has been appearing at an ever-increasing rate in academic debates in a number of different areas. The watershed event for several of these debates was a speech given by the renowned German philosopher and sociologist Jürgen Habermas on the occasion of his being awarded the Peace Prize of the German Book Trade in October 2001. However, Habermas’s speech, called “Faith and Knowledge,” is not the only impetus behind these discussions. In fact, some uses of the postsecular predate his speech, and they range across a wide variety of literatures. A few months ago, I tried to get as comprehensive an overview as possible and found that the concept has been used in cultural and literary studies, theology, philosophy, sociological theory and the sociology of religion, political theory, postcolonial thought, feminist thought, and even in urban studies. Reflecting the challenging reality of interdisciplinary work, some of the recurring themes in discussions of the postsecular are exploration, mapping, positionings, going “beyond” something, or being “between” two things. In what follows, I want to briefly summarize some of these themes.
From The Washington Post:
On Sept. 1, 1923, a 7.9-magnitude temblor struck Tokyo. More than 100,000 people lost their lives and more than 3 million were left homeless in the Great Kanto Earthquake. Fueled by rumors that ethnic Koreans were poisoning water wells, mobs killed thousands of Koreans in the days that followed. The Japanese government declared martial law, but the civilian authorities’ inability to deal with the disaster contributed to an eventual military takeover. Seventy-one years later, on Jan. 17, 1995, Kobe was hit by a 6.9-magnitude quake. The Great Hanshin Earthquake killed 6,400 people. Damage was estimated at more than $100 billion, or 2.5 percent of Japanese national income — similar to current estimates of the toll of last week’s 9.0-magnitude temblor in the Tohoku region of northern Japan. Yet, within 18 months, economic activity in Kobe had reached 98 percent of its pre-quake level. A state-of-the-art offshore port facility was built, housing was modernized — and a scruffy port city became an international showpiece. It is tempting to regard the different responses to these tragedies as proof that a more advanced society will respond more constructively to adversity. The simpler truth is that disasters can quickly transform a nation — for better, or for worse.
Which way will Japan go?
The March 11 earthquake and tsunami devastated a society that, for all its wealth, was stuck in a rut. Over the past two decades, Japan’s economic growth averaged an anemic 1 percent a year. Politically, the country was rudderless. The Liberal Democratic Party, which had governed almost continuously since the end of the U.S. military occupation following World War II, had finally worn out its welcome. And the novice Democratic Party of Japan, which had assumed power in 2009, was flailing.
More here.
From The Boston Globe:
As the nuclear crisis in Japan unfolded last week, experts scrambled to understand why things were going so horribly wrong. While no one was surprised that a 9.0 earthquake and a massive tsunami had caused severe and complicated problems, critics charged that various aspects of the Fukushima Daiichi plant’s design had made the catastrophe more perilous than it had to be. Some considered the particulars: Why had the cooling system’s backup generators been installed in a way that left them vulnerable to the tsunami? Why did the reactors use a cost-saving containment vessel whose disaster-worthiness had been repeatedly questioned by scientists? Why had the pool of spent fuel rods overheated?
For those taking a longer view, however, there is a larger question looming over the disaster: Why was Japan, a nation at high risk for earthquakes and natural disasters, using a type of reactor that needed such active cooling to stay safe? And the answer to that doesn’t lie with Japan, or the way the plant was built. The problem lies deeper, and concerns the entire nuclear industry. Japan’s reactors are “light water” reactors, whose safety depends on an uninterrupted power supply to circulate water quickly around the hot core. A light water system is not the only way to design a nuclear reactor. But because of the way the commercial nuclear power industry developed in its early years, it’s virtually the only type of reactor used in nuclear power plants today. Even though there might be better technologies out there, light water is the one that utility companies know how to build, and that governments have historically been willing to fund.
More here.
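The core of the Globe’s argument, that a light-water reactor must be actively cooled even after it stops, rests on decay heat: fission products keep releasing energy for days after shutdown. A rough sense of the magnitudes comes from the Way-Wigner approximation, a standard textbook rule of thumb that is not taken from the article; the operating history and power level below are assumptions for illustration.

```python
# Decay heat after reactor shutdown, via the Way-Wigner approximation
# (a textbook rule of thumb, not taken from the article):
#   P(t)/P0 = 0.0622 * (t**-0.2 - (t + T0)**-0.2)
# t = seconds since shutdown, T0 = seconds the reactor ran at power P0.

T0 = 3.15e7          # assumed: one year of full-power operation, in seconds
P0_MW = 3000.0       # hypothetical thermal power of a large reactor, in MW

def decay_heat_mw(t_seconds, T0=T0, P0=P0_MW):
    """Approximate residual decay heat (MW) t_seconds after shutdown."""
    fraction = 0.0622 * (t_seconds ** -0.2 - (t_seconds + T0) ** -0.2)
    return fraction * P0

for label, t in [("10 seconds", 10), ("1 hour", 3600),
                 ("1 day", 86400), ("1 week", 7 * 86400)]:
    print(f"{label:>10}: ~{decay_heat_mw(t):.0f} MW still to be carried away")
```

Even a day after shutdown, the sketch puts the residual heat in the tens of megawatts for a large plant, which is why losing the backup generators, and with them the circulation pumps, mattered so much at Fukushima.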
First, Phyllis Bennis in The Real News:
While the UN resolution was taken in the name of protecting civilians, it authorizes a level of direct U.S., British, French, NATO and other international military intervention far beyond the “no-fly zone but no foreign intervention” that the rebels wanted. Its real goal, evident in the speeches that followed the Security Council's March 17th evening vote, is to ensure that “Qaddafi must go,” as so many ambassadors described it. Resolution 1973 is about regime change, to be carried out by the Pentagon and NATO with Arab League approval, instead of by home-grown Libyan opposition.
The resolution calls for a no-fly zone, as well as taking “all necessary measures… to protect civilian populated areas under threat of attack in the Libyan Arab Jamahiriya, including Benghazi, while excluding a foreign occupation force of any form on any part of Libyan territory.” The phrase “all necessary measures” is understood to include air, ground, and naval strikes to supplement the call for a no-fly zone designed to keep Qaddafi's air force out of the skies. The U.S. took credit for the escalation in military authority, with Ambassador Susan Rice as well as other Obama administration officials claiming their earlier hesitation on supporting the UN resolution was based on an understanding of the limitations of a no-fly zone in providing real protection to [in this case Libyan] civilians. It's widely understood that a no-fly zone is most often the first step towards broader military engagement, so adding the UN license for unlimited military escalation was crucial to getting the U.S. on board. The “all necessary measures” language also appears to be the primary reason five Security Council members abstained on the resolution.
Rodger A. Payne over at Duck of Minerva responds:
Certainly, opponents of the no-fly zone want to frame the debate around regime change in order to question the legitimacy of the intervention. For instance, Phyllis Bennis of the Institute for Policy Studies asserts that “it's widely understood that a no-fly zone is most often the first step towards broader military engagement.” However, I would challenge that view. The U.S. for many years helped enforce a no-fly zone in Iraq that eventually became controversial and certainly was not the key stepping stone that legitimized war in Iraq. The Bush administration likely would have pursued war on Iraq even without a no-fly zone. And much of the world opposed the war in Iraq precisely because it violated international norms about the use of force.
Bennis also worries about the authorization of “all necessary measures… to protect civilians and civilian populated areas under threat of attack in the Libyan Arab Jamahiriya, including Benghazi, while excluding a foreign occupation force of any form on any part of Libyan territory.” She sees this as a virtual blank check for broader military intervention, though she overlooks the last clause.
Pico Iyer in the NYRB:
It’s been startling to witness mass demonstrations in countries across the Middle East for freedom from autocracy, while, in the Tibetan community, a die-hard champion of “people power” tries to dethrone himself and his people keep asking him to stay on. Again and again the Dalai Lama (who tends to be more radical and less romantic than most of his followers) has sought to find ways to give up power, and his community has sought to find ways to ensure he can’t. It could be said that almost the only time Tibetans don’t listen to the Dalai Lama is when he tells them they shouldn’t listen to him. Now, on the eve of an important election for Tibet’s government-in-exile, he has announced he is relinquishing formal political authority entirely—and the Tibetan government has accepted his decision, even as the move has alarmed many around the world and struck some as the end of an era.
In truth, the Dalai Lama’s statement was merely a continuation—and a stronger expression—of what he has been saying for years: that political leadership for the Tibetan people (in exile at least) belongs with the democratically elected government-in-exile he has so painstakingly set up over decades in Dharamsala, in India (elections for a new prime minister are to be held March 20); that he will function only as a “senior advisor,” helping to oversee the transition to a post-Dalai Lama era; and, most important, that the spiritual and temporal sides of Tibetan rule will at last be separate. As he noted in the speech that mentioned his “retirement”—his annual state-of-the-nation address, in effect, delivered on March 10, the anniversary of the 1959 Tibetan uprising against the People’s Republic of China and a frequent day of protest—he has believed, since childhood, that church and state should not be one and that the fate of Tibet should be in the hands of all Tibetans.
Joe Nocera in the NYT:
The piñata sat alone at the witness table, facing the members of the House subcommittee on financial institutions and consumer credit.
The Wednesday morning hearing was titled “Oversight of the Consumer Financial Protection Bureau.” The only witness was the piñata, otherwise known as Elizabeth Warren, the Harvard law professor hired last year by President Obama to get the new bureau — the only new agency created by the Dodd-Frank financial reform law — up and running. She may or may not be nominated by the president to serve as its first director when it goes live in July, but in the here and now she’s clearly running the joint.
And thus the real purpose of the hearing: to allow the Republicans who now run the House to box Ms. Warren about the ears. The big banks loathe Ms. Warren, who has made a career out of pointing out all the ways they gouge financial consumers — and whose primary goal is to make such gouging more difficult. So, naturally, the Republicans loathe her too. That she might someday run this bureau terrifies the banks. So, naturally, it terrifies the Republicans.
The banks and their Congressional allies have another, more recent gripe. Rather than waiting until July to start helping financial consumers, Ms. Warren has been trying to help them now. Can you believe the nerve of that woman?
Mike Fleming in Deadline:
Miral, an adaptation of Rula Jebreal's coming-of-age story of an orphaned Palestinian girl growing up in the wake of the Arab-Israeli War, has become a hot-button film for director Julian Schnabel. When the painter/filmmaker showed the film to the MPAA, he got an R for upsetting images (he was able to have the rating overturned to PG-13). Before showing it at the United Nations this week, he had to first respond to a public letter of protest from the American Jewish Committee. Here, Schnabel discusses his personal awakening to Israel's controversial settlement policy, one he feels has turned Palestinians into second-class citizens in the name of security.
DEADLINE: Were any members of the American Jewish Committee at the screening?
SCHNABEL: I asked from the stage and no one responded. I invited them and thought it would be good for them to see it. It was such a beautiful evening, a 45-foot screen in the middle of the General Assembly. There were 1,600 people. Robert De Niro, Sean Penn, Steve Buscemi and Josh Brolin showed up in solidarity, along with artists like Ross Bleckner and David Salle. And Vanessa Redgrave, who a long time ago got in a lot of trouble for saying something in support of Palestinian people at the Oscars. I remember being a kid when she said that, and everybody being so pissed off, saying just because you’re an actress getting an award doesn’t mean you should have a thought or a political point of view. I think Paddy Chayefsky said that. He was brilliant, but it’s very easy to pick on somebody who speaks up.
More here.
Shadi Hamid in Democracy Arsenal:
After much “dithering,” which seems to be the consensus word choice for Obama's sputtering Mideast policy, the US has finally suggested that it can, sometimes, do the right thing, even if it does it three weeks later (I looked back to see when I had written my Slate article calling for international intervention: February 23).
The arguments against military intervention struck me as surprisingly weak and almost entirely dependent on raising the spectre of Iraq and Afghanistan. It was somewhat unclear how and why Iraq 2003 should be compared to Libya 2011. Michael Cohen, whose preference for foreign policy restraint is admirable, worried recently that John McCain and Joe Lieberman's support for a no-fly zone portended bad things to come. Just because McCain and Lieberman support something doesn't automatically mean it's bad.
Cohen writes that Iraq and Afghanistan “are daily reminders that the use of U.S. military force can have unforeseen and often unpredictable consequences.” Yes, but that's sort of the point with bold action. It's supposed to be risky (in fact, if it's not, you may not be going far enough). Success isn't guaranteed. And no one is pretending that a positive outcome in Libya is a foregone conclusion now that the UN Security Council has adopted a resolution authorizing military force. But it does make a successful outcome more likely.
From Nature:
A pianist plays a series of notes, and the woman echoes them on a computerized music system. The woman then goes on to play a simple improvised melody over a looped backing track. It doesn't sound like much of a musical challenge — except that the woman is paralysed after a stroke, and can make only eye, facial and slight head movements. She is making the music purely by thinking.
This is a trial of a computer-music system that interacts directly with the user's brain, by picking up the tiny electrical impulses of neurons. The device, developed by composer and computer-music specialist Eduardo Miranda of the University of Plymouth, UK, working with computer scientists at the University of Essex, should eventually help people with severe physical disabilities, caused by brain or spinal-cord injuries, for example, to make music for recreational or therapeutic purposes. The findings are published online in the journal Music and Medicine. “This is an interesting avenue, and might be very useful for patients,” says Rainer Goebel, a neuroscientist at Maastricht University in the Netherlands who works on brain-computer interfacing.
More here.
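The article doesn't spell out how Miranda's system decodes the signal, so the following is only a generic sketch of the kind of pipeline a brain-computer music interface needs: digitize a weak electrical trace, find its dominant rhythm, and map that onto a musical choice. The signal here is synthetic, and the frequency-to-note mapping is invented.

```python
import numpy as np

# Toy brain-computer music interface (illustrative only; not the system
# described in the article). We synthesize a noisy "EEG" trace with a
# dominant oscillation, recover that frequency with an FFT, and map
# frequency bands to notes. All band edges and notes are hypothetical.

FS = 256                                  # sampling rate, Hz
t = np.arange(0, 2.0, 1.0 / FS)           # two seconds of signal
eeg = 0.5 * np.sin(2 * np.pi * 10 * t) + 0.3 * np.random.randn(t.size)

spectrum = np.abs(np.fft.rfft(eeg))
freqs = np.fft.rfftfreq(eeg.size, 1.0 / FS)
dominant = freqs[spectrum[1:].argmax() + 1]    # skip the DC bin

BANDS = [(4, 8, "C4"), (8, 13, "E4"), (13, 30, "G4")]  # invented mapping
note = next((n for lo, hi, n in BANDS if lo <= dominant < hi), "rest")
print(f"dominant ~{dominant:.1f} Hz -> play {note}")
```

A real system would read from electrodes rather than a synthesized sine wave and would need per-user calibration, but the digitize-analyze-map loop is the common skeleton.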
From The New York Times:
The universe, the 18th-century mathematician and philosopher Jean Le Rond d’Alembert said, “would only be one fact and one great truth for whoever knew how to embrace it from a single point of view.” James Gleick has such a perspective, and signals it in the first word of the title of his new book, “The Information,” using the definite article we usually reserve for totalities like the universe, the ether — and the Internet. Information, he argues, is more than just the contents of our overflowing libraries and Web servers. It is “the blood and the fuel, the vital principle” of the world. Human consciousness, society, life on earth, the cosmos — it’s bits all the way down.
Gleick makes his case in a sweeping survey that covers the five millenniums of humanity’s engagement with information, from the invention of writing in Sumer to the elevation of information to a first principle in the sciences over the last half-century or so. It’s a grand narrative if ever there was one, but its key moment can be pinpointed to 1948, when Claude Shannon, a young mathematician with a background in cryptography and telephony, published a paper called “A Mathematical Theory of Communication” in a Bell Labs technical journal. For Shannon, communication was purely a matter of sending a message over a noisy channel so that someone else could recover it. Whether the message was meaningful, he said, was “irrelevant to the engineering problem.” Think of a game of Wheel of Fortune, where each card that’s turned over narrows the set of possible answers, except that here the answer could be anything: a common English phrase, a Polish surname, or just a set of license plate numbers. Whatever the message, the contribution made by each signal — what he called, somewhat provocatively, its “information” — could be quantified in binary digits (i.e., 1s and 0s), a term that conveniently condensed to “bits.”
More here.
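Shannon's quantification is simple enough to state exactly: a message that singles out one outcome from n equally likely possibilities carries log2(n) bits, and when outcomes aren't equally likely the average drops to the entropy H = -sum(p * log2(p)). A minimal sketch (the example distributions are mine, not Gleick's):

```python
import math

# Shannon information: picking one outcome from n equally likely
# possibilities conveys log2(n) bits. (Illustrative numbers only.)

def bits(n_possibilities):
    return math.log2(n_possibilities)

print(bits(2))    # a fair coin flip: 1.0 bit
print(bits(26))   # one letter, all equally likely: ~4.7 bits

# Shannon entropy for unequal probabilities: H = -sum(p * log2(p)).
# Real English letters are far from uniform, so each carries fewer bits.
def entropy(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(entropy([0.5, 0.25, 0.25]))  # 1.5 bits per symbol
```

Each turned-over card in the Wheel of Fortune analogy shrinks the set of live possibilities, and the information gained is just the log of the ratio of set sizes before and after.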