JR. Action in Phnom Penh, House in the Water – close up, Old Station Habitations, Cambodia. 2009
Cibachrome print mounted on aluminium under perspex, framed.
JR has recently been named the 2011 recipient of the annual TED prize of $100,000.
Panopticonopolis over at The Pinprick of Desire:
Recently, I went to a panel discussion on urban agriculture at the Kellen Gallery, part of the Living Concrete/Carrot City exhibition. To anyone tuned into the food debate in the New York area, familiar sentiments were on display: scrappy entrepreneurs with a love for farming and/or eating, nurturing their holistic vision of a just society, itself replete with happy farmers tilling ever-healthier soil, which in turn produces nutritious fare for contented locavores, farmer’s-market enthusiasts, schoolkids, or [insert your constituency here], building greater community while lessening our carbon footprint, etc.
Being the happy curmudgeon, I was glad to sneak in the following during Q&A:
“Aerofarm is a startup that is only a few years old. Their model does not have anything to do with creating community, or building soil health, or even encouraging food awareness or organic agriculture. They have developed technology that allows people to grow food in enclosed spaces, without sunlight, without soil, even. They recently received $500,000 in seed funding from several venture capital groups. Is there room for everyone to play in this space, or does Aerofarm, etc., represent a threat to the vision of (urban) agriculture as implicitly envisioned by this panel and the food movement as we know it in general?”
It was disappointing to see the panel – or what takers there were – squirm their way through this question. One posited that “hydroponic agriculture” is a tremendous waste of resources. This may or may not be true, but the fact is that Aerofarm is not a hydroponic operation. Another was concerned that Aerofarm would dilute the nascent “brand” of urban agriculture. No one really understood that my question was about money.
So what, exactly, does $500,000 “mean”? Let’s compare Aerofarm’s seed funding with the MacArthur Genius Grant, which itself is $500,000.
According to the MacArthur Foundation’s website, “MacArthur Fellows Program awards unrestricted fellowships to talented individuals who have shown extraordinary originality and dedication in their creative pursuits and a marked capacity for self-direction.” This essentially means that you have been busting your hump for a good chunk of your life before the Genius Grant smiles on you, e.g., Will Allen, MacArthur 2008.
In fact, it was Cheryl Rogowski, not Allen, who was recognized as the first farmer-recipient by MacArthur, and that was in 2004. She has been farming since 1983. Wags might point out that people have been, in fact, farming for quite a bit longer than that. Nevertheless, in business school parlance, 20 years is one hell of a product development cycle.
Walter Benn Michaels in Le Monde Diplomatique:
Over the summer two stars of the American right had a friendly argument about who poses the greatest threat to the United States. Fox News host Bill O’Reilly went with the conventional wisdom: al-Qaida. During the Bush administration, it was the clash of cultures that organised the way American conservatives saw the world. When they worried about issues like illegal immigration, what they were afraid of was al-Qaida operatives mingling among the future valet parkers of Chicago and meatpackers of Iowa. But O’Reilly’s new colleague and ratings rival, Glenn Beck, had a more surprising answer: it’s not the jihadists who are trying to destroy our country, it’s the communists. When Beck and the Tea Party, the rightwing populists most closely tied to him, express their deepest worries, it’s not terrorism they fear, it’s socialism.
What’s surprising is that worrying about communists was more characteristic of the Eisenhower years than of post-9/11. Even more surprising is that Beck is a generation younger than O’Reilly. He hadn’t even been born in 1963 when Eisenhower’s secretary of agriculture, Ezra Taft Benson, gave the speech about Khrushchev’s promise to keep “feeding us socialism” mouthful by mouthful until one day (today, according to Beck, who cites this speech frequently) we wake up and realise we’ve “already got communism”.
Most surprising of all is that this reinvention of the cold war is working. Tea Partiers rush to expose the communists in the Democratic Party; on Amazon’s bestseller lists, the highest ranking political book is FA Hayek’s The Road to Serfdom, and even the celebrated radio talk show host Rush Limbaugh has started worrying about the “communist” spies “who work for Vladimir Putin”.
Why communism? And why now? Islamophobia at least has some pretext based in reality: jihadists really did kill thousands of Americans. But not only were there no communists on the planes that hit the World Trade Centre, today there are virtually no communists anywhere in the US, and precious few in the former USSR. Indeed, if there’s one thing Vladimir Putin and Barack Obama can agree on, it’s their enthusiasm for what Putin (at Davos!) called “the spirit of free enterprise”. And yet, like anti-semitism without Jews, anti-communism without communists has come to play a significant political role on the right, especially on what we might call the anti-neoliberal right.
From the Op-Ed page of the New York Times:
Light Verse
It’s just five, but it’s light like six.
It’s lighter than we think.
Mind and day are out of sync.
The dog is restless.
The dog’s owner is sleeping and dreaming of Elvis.
The treetops should be dark purple,
but they’re pink.
Here and now. Here and now.
The sun shakes off an hour.
The sun assumes its pre-calendrical power.
(It is, though, only what we make it seem.)
Now in the dog-owner’s dream,
the dog replaces Elvis and grows bigger
than that big tower
in Singapore, and keeps on growing until
he arrives at a size
with which only the planets can empathize.
He sprints down the ecliptic’s plane,
chased by his owner Jane
(that’s not really her name), who yells at him
to come back and synchronize.
— VIJAY SESHADRI, author of “The Long Meadow”
More here.
Mohsin Hamid on reading Antonio Tabucchi's novel, from his introduction to its new edition, at his own website:
I am sometimes asked to name my favourite books. The list changes, depending on my mood, the year, tricks played by memory. I might mention novels by Nabokov and Calvino and Tolkien on one occasion, by Fitzgerald and Baldwin and E. B. White on another. Camus often features, as do Tolstoy, Borges, Morrison, and Manto. And then I have my wild card, the one I tend to show last and with most pleasure, because it feels like revealing a secret.
Sostiene Pereira, I say, by Antonio Tabucchi.
These words are usually greeted with one of two reactions: bewilderment, which is far more common, or otherwise a delighted and conspiratorial grin. It seems to me that Pereira is not yet widely read in English, but holds a heroin-like attraction for those few who have tried it.
My own Pereira habit began a decade ago, in San Francisco’s City Lights bookstore, where an Italian girlfriend suggested I give it a try. San Francisco was the perfect place for my first read: its hills and cable cars and seaside melancholy were reminiscent of Pereira’s Lisbon setting; its Italian heritage, from the Ghirardelli chocolate factory at its heart to the wine valleys surrounding it, evoked Pereira’s Italian author; and its associations with sixties progressivism and forties film noir went perfectly with Pereira’s politics and pace.
More here.
Ian Crouch in The New Yorker:
A hinting story, Swartwood explains, should do in twenty-five words what it could do in twenty-five hundred, that is, it “should be complete by standing by itself as its own little world.” And, like all good fiction, it should tell a story while gesturing toward all the unknowable spaces outside the text.
The book is divided into three sections: “life & death,” “love & hate,” and “this & that.” Several stories too fully embrace the gimmick, becoming tiny O. Henry tales complete with tidy setups and kickers. Something about the space constraints makes the stories go for too much, rejecting intimacy for some trumped-up idea of scale. The best, however, share an off-beat and generally macabre sensibility. Here are two good examples:
“Blind Date,” by Max Barry.
She walks in and heads turn. I’m stunned. This is my setup? She looks sixteen. Course, it’s hard to tell, through the scope.
“Houston, We Have a Problem,” by J. Matthew Zoss.
I’m sorry, but there’s not enough air in here for everyone. I’ll tell them you were a hero.
Violence is a lingering theme, often conveyed with a power that lasts long after the short time it takes to read these tales. Take “Cull,” by L. R. Bonehill, a compressed post-apocalyptic snapshot:
There had been rumors from the North for months. None of us believed it, until one night we started to kill our children too.
More here.
At the Edge of the Beach
We are at the end of the world, Mare and I,
at the rim of the ice dark Atlantic, its chill curls
lapping at our toes.
I’ve spent July with sweaty arms tight round my waist, and Mary claims I taught her all the words
she’s not supposed to know. She’s going to marry Christ, and not some ordinary boy.
Late summer’s lick of winter ruffles our hair. We should go in.
I say how romantic those shacks out along the point, Mare, how poignant they are, strung to their
utility poles.
Laur, she says sensibly, it would be prettier without them.
We sat on that curve of beach, when we were twelve,
where civilization crept out among the sand pipers
on sad loops of utility lines.
by Laurie Joan Aron
Paul McDonough in The Paris Review:
What turned me away from painting was a realization that the streets and parks of Boston provided me with subject matter that I could not conjure up in my studio. At that point, a blank canvas drew nothing but a blank stare. So, with a newly purchased 35mm Leica loaded with Tri-X film, I began my forays into downtown Boston to photograph. The kind of photographs I took then related to my art school days, when I would amble around the city making quick pencil sketches of people on park benches and subways. After roaming around Vermont in the summer of 1964, I decided to move to Cambridge, MA, where I took a full-time job in a commercial art studio. I was by this time married to my first wife and our plan was to save up enough to live for a year in Europe. Instead we wound up in New York, arriving by U-Haul in the summer of 1967. Rents were cheap, and we could now get by on my part-time work in advertising studios. I had abundant free time, and I took full advantage of it.
It was the sheer quantity of people on the street that made the spectacle unique. There were so many opportunities; you had to be perpetually alert and believe something was going to happen. You were not looking for photographs, but for the raw material that would make you want to photograph; the gesture or expression that demanded to be recorded. You were in the moment and you didn’t judge or qualify. For example, in the 1973 photograph taken at a parade, two businessmen are perched like statues on standpipes, trying to see over the heads of the crowd that had momentarily parted. They were serious; they had a sense of purpose. About what, the photograph doesn’t give a clue. That information is outside the frame’s viewpoint and beyond the camera’s scope.
More here.
From The Boston Globe:
Change, in politics, is a lyrical and seductive tune. Think about Woodrow Wilson’s New Freedom, or Franklin Roosevelt’s New Deal; how Ronald Reagan greeted us with “Morning in America,” or how Barack Obama ran an entire presidential campaign around the theme of “change.” To listen to the victory speeches delivered on Election Day last week, one might start to believe that change is in the air again. Certainly, candidates across the country ran–and won–on the promise of changing Washington. But anyone counting on a radical transformation in government should steel themselves for another round of heartbreak come January, when the new Congress takes office: Their leadership is no more likely to revolutionize government than Obama’s did in 2008, or the long line of presidents and congresses before them.
We might feel frustrated at this inaction, or relieved, depending on our politics. But what we shouldn’t feel is surprised. Because no matter how much politicians love to serenade us to the tune of change, and no matter how happy we are to flirt right back, our governmental system was designed to prevent seismic change from happening. It’s easy to see this as a flaw, or as a failure of the politicians we elect, but that would be wrong: In fact, the people to blame are the Founders of our republic. When they wrote the Constitution, setting out how power would be wielded, shared, and transferred, they did it specifically to prevent radical change. By conscious and deliberate design, our system favors incremental changes over the kind of revolutionary change that politicians love to promise. And 220 years of history, so far, suggest that that has been a very good thing indeed.
More here.
John Allen Paulos in his Who's Counting column at ABC News:
To obtain a fair result from a biased coin, the mathematician John von Neumann devised the following trick. He advised the two parties involved to flip the coin twice. If it comes up heads both times or tails both times, they are to flip the coin two more times.
If it comes up H-T, the first party will be declared the winner, while if it comes up T-H, the second party is declared the winner. The probabilities of both these latter events (H-T and T-H) are the same because the coin flips are independent even if the coin is biased.
For example, if the coin lands heads 70 percent of the time and tails 30 percent of the time, an H-T sequence has probability .7 x .3 = .21 while a T-H sequence has probability .3 x .7 = .21. So 21 percent of the time the first party wins, 21 percent of the time the second party wins, and the other 58 percent of the time when H-H or T-T comes up, the coin is flipped two more times.
More here.
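For readers who want to see the trick in action, here is a minimal Python sketch of the von Neumann procedure Paulos describes; the 70/30 bias and the function names are illustrative assumptions, not anything taken from his column.

import random

def biased_flip(p_heads=0.7):
    # Simulate one flip of a biased coin: 'H' with probability p_heads, else 'T'.
    return 'H' if random.random() < p_heads else 'T'

def von_neumann_fair_flip(p_heads=0.7):
    # Flip the biased coin in pairs until the two results differ.
    # H-T and T-H each occur with probability p*(1-p), so the outcome is fair.
    while True:
        first, second = biased_flip(p_heads), biased_flip(p_heads)
        if first != second:
            return 'first party' if first == 'H' else 'second party'

# Quick check: over many trials each party should win about half the time.
trials = 100_000
first_wins = sum(von_neumann_fair_flip() == 'first party' for _ in range(trials))
print(f"first party won {first_wins / trials:.3f} of the time")

Run repeatedly, the printed fraction hovers around 0.500 even though the underlying coin lands heads 70 percent of the time.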
Garth Risk Hallberg in The Millions, via Andrew Sullivan:
One opens The Atlantic Monthly and is promptly introduced to a burst of joyless contrarianism. Tiring of it, one skims ahead to the book reviews, only to realize: this is the book review. A common experience for even the occasional reader of B.R. Myers, it never fails to make the heart sink. The problem is not only one of craft and execution. Myers writes as if the purpose of criticism were to obliterate its object. He scores his little points, but so what? Do reviewers really believe that isolating a few unlovely lines in a five hundred page novel, ignoring the context for that unloveliness, and then pooh-poohing what remains constitutes a reading? Is this what passes for judgment these days?
If so, Myers would have a lot to answer for. But in the real world, instances don’t yield general truths with anything like the haste of a typical Myers paragraph (of which the foregoing is a parody). And so, even as he grasps for lofty universalism, Brian Reynolds Myers remains sui generis, the bad boy of reviewers, lit-crit’s Dennis Rodman.
Myers came to prominence, or what passes for it in the media microcosmos, via “A Reader’s Manifesto,” a long jeremiad against “the modern ‘literary’ best seller” and “the growing pretentiousness of American literary prose.” It earned notice primarily for its attack on the work and reputation of novelists lauded for their style – Cormac McCarthy, Don DeLillo, and E. Annie Proulx, among others. Many of these writers were ripe for reevaluation, and “A Reader’s Manifesto” was read widely enough to land Myers a contributing editor gig at The Atlantic. It was subsequently published as a stand-alone book. Yet the essay was itself little more than an exercise in style, and not a very persuasive one at that. It was hard to say which was more irritating: Myers’ scorched-earth certainties; his method, a kind of myopic travesty of New Criticism; or his own prose, a donnish pastiche of high-minded affectation and dreary cliché.
I can’t be the only reader who wanted to cry out against the manifesto being promulgated on my behalf, but Myers had insulated himself in several ways. First, he had been so thoroughgoingly tendentious, and at such length, that to rebut his 13,000 words required 13,000 of one’s own. Second: his jadedness was infectious. It made one weary of reading, weary of writing, weary of life.
Alice Bell over at her blog:
Every now and again, the term “scientific literacy” gets wheeled out and I roll my eyes. This post is an attempt to explain why.
The argument for greater scientific literacy is that to meaningfully participate in, appreciate, and even survive our modern lives, we all need certain knowledge and skills about science and technology. OK. But what will this look like exactly, how will you know what we all need to know in advance, and how on earth do you expect to get people trained up? These are serious problems.
Back in the early 1990s, Jon Durant very usefully outlined the three main types of scientific literacy. This is probably as good a place to start as any:
* Knowing some science – For example, having A-level biology, or simply knowing the laws of thermodynamics, the boiling point of water, what surface tension is, that the Earth goes around the Sun, etc.
* Knowing how science works – This is more a matter of knowing a little of the philosophy of science (e.g. ‘The Scientific Method’, as studied through the work of Popper, Lakatos or Bacon).
* Knowing how science really works – In many respects this agrees with the previous point – that the public need tools to be able to judge science, but does not agree that science works to a singular method. This approach is often inspired by the social studies of science and stresses that scientists are human. It covers the political and institutional arrangement of science, including topics like peer review (including all the problems with this), a recent history of policy and ethical debates and the way funding is structured.
The problem with the first approach is what IB Cohen, writing in the 1950s, called “The fallacy of miscellaneous information”: that a set of often unrelated nuggets of information pulled from the vast wealth of human knowledge is likely to be useful in everyday life (or that you'll remember it when it happens to be needed). That's not to say that these bits of knowledge aren't useful on occasion. Indeed, I remember my undergraduate science communication tutor telling us about how she drowned a spider in the toilet with a bit of basic knowledge of lipids and surface tension. However, it's unrealistic to list all the things a modern member of society might need to know at some point in their life, get everyone to learn them off in advance and then wash our hands of the whole business. This is especially problematic when it comes to science, as such information has the capacity to change (or at least develop). Instead, we all need access to useful information when it is needed.
[H/t: Jennifer Ouellette]
Jaswant Singh in Project Syndicate:
Barack Obama, the sixth American president to visit India since it gained independence, arrives at a trying time, both for the United States and for India. Some of Obama’s closest advisers have just resigned, opening an awkward gap on national security and the economy – the focus of his meetings with India’s government.
For India, the issues on the agenda for Obama’s visit are immense and complex, and the options for resolving them are extremely limited. Those related to security in Afghanistan and Pakistan are as treacherous as they have ever been. Bilateral economic, trade, and currency disagreements may not be as bitter as they are between the US and China, but they are thorny, and lack of resolution is making them more intractable.
Nuclear non-proliferation remains one of Obama’s priorities, as does the sale of US civilian nuclear technology to India, for which former President George W. Bush cleared the way. And Obama will be keen to know what help India can provide with Iran, a country with which India has smooth relations, owing to their shared worries over Afghanistan and Pakistan.
Given this potent list of challenges, what are the prospects for Obama’s passage to India? Some years ago, I was queried by then US Deputy Secretary of State Strobe Talbott, who was helping to prepare President Bill Clinton’s visit. As India’s foreign minister at the time, I told him: “Why make the visit destinational? Be content with the directional,” or some such words. That response retains its flavor today: as new directions in India-US relations are set, new destinations will follow.
Eduardo Mendieta in The Immanent Frame:
The centrality of religion to social theory in general and philosophy in particular explains why Jürgen Habermas has dealt with it, in both substantive and creative ways, in all of his work. Indeed, religion can be used as a lens through which to glimpse both the coherence and the transformation of his distinctive theories of social development and his rethinking of the philosophy of reason as a theory of social rationalization.
For Habermas, religion has been a continuous concern precisely because it is related to both the emergence of reason and the development of a public space of reason-giving. Religious ideas, according to Habermas, are never mere irrational speculation. Rather, they possess a form, a grammar or syntax, that unleashes rational insights, even arguments; they contain, not just specific semantic contents about God, but also a particular structure that catalyzes rational argumentation.
We could say that in his earliest, anthropological-philosophical stage, Habermas approaches religion from a predominantly philosophical perspective. But as he undertakes the task of “transforming historical materialism” that will culminate in his magnum opus, The Theory of Communicative Action, there is a shift from philosophy to sociology and, more generally, social theory. With this shift, religion is treated, not as a germinal for philosophical concepts, but instead as the source of the social order. This approach is of course shaped by the work of the classics of sociology: Weber, Durkheim, and even Freud. What is noteworthy about this juncture in Habermas’s writings is that secularization is explained as “pressure for rationalization” from “above,” which meets the force of rationalization from below, from the realm of technical and practical action oriented to instrumentalization. Additionally, secularization here is not simply the process of the profanation of the world—that is, the withdrawal of religious perspectives as worldviews and the privatization of belief—but, perhaps most importantly, religion itself becomes the means for the translation and appropriation of the rational impetus released by its secularization. Here, religion becomes its own secular catalyst, or, rather, secularization itself is the result of religion.
Meghan O'Rourke in Slate:
There are few writers as suited to writing insightfully about loss as the mature Barthes was. Grief is at once a public and a private experience. One's inner, inexpressible disruption cannot be fully realized in one's public persona. As a brilliant explicator of how French culture shapes its self-understanding through shared “signs,” Barthes was primed to notice the social dynamics at play among friends and colleagues responding to his bereavement. As an adult son whose grief for his beloved mother—he lived with her and said she provided his “internalized law”—was unusually acute, he was subject to grief's most disorienting intensities. The result is a book that powerfully captures, among other things, the shiver of strangeness that a private person experiences in the midst of friends trying to comfort or sustain him in an era that lacks clear-cut rituals or language for loss.
In one of Mourning Diary's first entries, Barthes describes a friend worrying that he has been “depressed for six months.” (His mother was ill before her death.) It was “said discreetly, as always,” Barthes notes. Yet the statement irritates him: “No, bereavement (depression) is different from sickness. What should I be cured of? To find what condition, what life? If someone is to be born, that person will not be blank, but a moral being, a subject of value—not of integration.” Noting that “signs” fail to convey the private depths of mourning, he comments on the tension between others' expectant curiosity and the mourner's own suffering: “Everyone guesses—I feel this—the degree of a bereavement's intensity. But it's impossible (meaningless, contradictory signs) to measure how much someone is afflicted.”
Rachel Maddow and Amanda Marcotte discuss the elections, based on Amanda's piece at Slate:
Hussein Ibish over at his blog:
Some months ago my dear friend the great critical theorist R. Radhakrishnan suggested I pay some attention in writing to the phenomenon I discussed with him on several occasions whereby we respond emotionally, aesthetically or intellectually to cultural artifacts that we nonetheless do not, at a certain level, respect. In fact, we may know very well that a cultural product is inferior if not fundamentally absurd, and yet it may have a profound impact and even an irresistible draw to us. How and why does that operate? What's going on when we respond so powerfully at all kinds of levels to something we feel, whether on reflection or viscerally, is either completely or in some senses beneath contempt? How do we account for such “guilty pleasures?” Of course, this version of guilty pleasure is a subset of the deeper existential problem of why we want things that we know very well are bad for us: why we cling to, or mourn the loss of, dysfunctional relationships with toxic people; persist with, or pine for, self-destructive behavior of one kind or another; or find ourselves in the grip of a political or religious ideology we know very well, at a certain level at any rate, is indefensible and possibly loathsome. But for the meanwhile, let's stick to the subject of bad art.
I'm going to begin looking at this problem by taking on what has been, in my life at any rate, one of its more gruesome manifestations: films featuring the character James Bond and the Ian Fleming novels on which they are based. Let's be clear at the outset: on the whole and in most senses they are without question garbage, and toxic garbage at that. The films are militantly stupid and implausible, often insultingly so, distinctly racist and irredeemably sexist, and the novels even more racist and sexist (more about the dismal ideology at work in them a little later). And yet some of us are drawn to some of them in spite of having no respect for them whatsoever, and even finding them offensive. In particular the early Bond movies starring Sean Connery have a real pull on my imagination and I'd like to begin my exploration of the morphology of guilty pleasures by considering how on earth that could possibly be the case.
The Bond films are useful as a starting point because, for me at any rate, they point directly to one of the most important and powerful forces behind guilty pleasures of this kind: nostalgia.
From The Telegraph:
There’s a charming poem by Seamus Heaney about Socrates’ last day. It expresses a brief surprise that Socrates could believe in dreams. But the poet quickly acknowledges that the philosopher did live in a dream world. Bettany Hughes’s book leaves us in no doubt. The Hemlock Cup is a biography of Socrates, and also a lot more than that. Yes, it speculates on the walks he would have taken around the Agora in Athens (admittedly with bundles of suggestive evidence); it suggests just what the hemlock would have done to him; and it attributes Socrates’ habit of standing stock still for hours to cataleptic seizures. For all that, Hughes is more concerned with the philosopher’s time and place. As she unfolds the tale, she brings us an edited history of fifth-century BC Athens, too.
This isn’t padding, or even scene-setting (atmospheric though it always is). Without overstating the case, she shows how the city’s life runs alongside the philosopher’s, and then takes a different course. Socrates would always warn that an acquisitive life was not worth living and that the pursuit of gold is vacuous; meanwhile Athens revelled in becoming an empire, so it conquered more and mined more and showed off more. And then there was an attempt to colonise Sicily. Out of Athens and Socrates, the former emerges as the more tragic character, with its greed and its failure to learn from its wisest citizen until in the throes of its downfall.
More here.