The rage of Franzen and Kraus

Jacob Mikanowski at The Point Magazine:

The Kraus Project’s annotators are admirably upfront about Kraus’s failings, taking up the vexed question of his putative self-hatred head-on. Paul Reitter, who devoted a whole book to the subject, argues that Kraus was being deliberate with his stereotypes, strategically deploying an anti-Semitic discourse in order to critique it. This seems to me to be too clever an explanation by half. I think it more likely that he simply didn’t care. Kraus was a bully and a snob, a lover of Offenbach and a pursuer of aristocratic ladies. He affected the tastes of an older, landed generation, even as he scandalized their manners, and he elevated their haute-bourgeois prejudices into a dissident religion with the force of his personality. He was too caught up in his genius and gigantic self-worth to care about everyday politics, much less “discourse.”

So why has Franzen expended so much effort to bring him back? In a word: rage. Kraus taught Franzen how to be angry, and how to channel that anger at the world. He writes about this as if it was a revelation: “Anger descended on me so near in time to when I fell in love with Kraus’s writing that the two occurrences are practically indistinguishable.” Revisiting Kraus thirty years later gives Franzen an opportunity to vent about all his favorite subjects. He complains that Macs are too sleek, Twitter too shallow, France too pleasurable and book critics too nice. Some of his criticisms have a vaguely anti-capitalist tenor. For some reason, Jeff Bezos, intent on enserfing writers and critics alike with the power of Amazon’s (wholly mythical) “one-day free shipping,” emerges as one of his main villains. The “Internet” comes in for repeated beatings, for its “ninth-grade” social dynamics, its snarkiness and its tendency toward solipsism. That he is leveling these charges from the platform of a particularly bitter and minutiae-filled memoir goes blissfully unmentioned.

more here.

C.S. Lewis’s first love was poetry

Laura C. Mallonee at Poetry Magazine:

In 1926, at the height of modernism’s golden age, a young C.S. Lewis and a few of his friends decided to play a literary prank. As told in Alister McGrath’s clear-eyed biography, they wrote a spoof of T.S. Eliot’s poetry and submitted it for publication at The Criterion, where Eliot was editor. “My soul is a windowless façade,” the poem began, and went on to ruminate over the Marquis de Sade, upholstered pink furniture, and mint juleps. If the older poet took the bait and published the poem, Lewis, who was then 27 years old and a fellow at Magdalene College, would use the event “for the advancement of literature and the punishment of quackery.” If not, it might prove there was something more to modernist poetry than he thought.

But Eliot never answered Lewis’s letter, and looking back on the ruse now is like watching a mouse brazenly challenge a cat. Eliot was then at the pinnacle of his career, having already published Prufrock and Other Observations (1917) and The Waste Land (1922); the younger Lewis’s literary future was still nebulous. Eliot has been called the most important poet of the 20th century; few today are aware that Lewis, the mastermind behind The Chronicles of Narnia, wrote poetry at all. But poetry was his first love, and his devotion to the form will be officially honored this month with the unveiling of a monument at the Poets’ Corner in Westminster Abbey, 50 years after his death.

more here.

The miraculous novels and life of Penelope Fitzgerald

Neel Mukherjee at The New Statesman:

The word that immediately occurs to one when thinking of Penelope Fitzgerald’s last four novels – Innocence (1986), The Beginning of Spring (1988), The Gate of Angels (1990), and The Blue Flower (1995) – is “miraculous”. There is nothing quite like them in English literature: in fact, they are not really English novels at all, except in language. They are inexhaustible in their meanings; mysterious and oblique, even baffling, in craft, beauty and effect; and every reader who has come to them has asked, at one time or other, a variant of the question, “How is it done?”

In this first ever biography of Fitzgerald, which comes 13 years after her death, Hermione Lee, pointedly using the (made-up) words of Novalis in The Blue Flower as her epigraph (“If a story begins with finding, it must end with searching”), has set out to attempt some answers to that question. The result is a luminous masterpiece of life-writing.

more here.

Tuesday Poem

Poem

A man applying gold leaf
to the window's word backwards
combed his hair to charge
his gilder's tip with static,
so he charged his hair with gold.
He was electrically the gold-
haired father of the gold word
GOODS! and god of the store's
attractions. When he waved his maul-
stick I went in to buy
the body of his mystery.

by Alan Dugan
from New and Collected Poems 1961-1983
The Ecco Press 1983

The Fall of India’s Conscience

The biggest story in India right now is that one of the country’s most famous and controversial journalists stands accused of sexual assault. How did a man known for skewering the powerful end up this way?

Tunku Varadarajan in The Daily Beast:

Why is this story dominating the Indian news media? The most obvious reason is the identity of the accused. Tejpal is at the moral heart of the Indian elite, a man who has given his last 10 years to a (sometimes gaudy) fight against everything that is insufferable in India, and that includes sexual violence. In February of this year, the magazine produced a special issue on sexual violence with an electrifying cover. “I Am Every Woman,” it said, and Tehelka was widely lauded for its pugnacity. This right-on man is now accused of rape, and the news has come to everyone in the media—friends, colleagues, and competitors—as a tectonic jolt. (A nation accustomed to Tejpal the Crusader is stunned.)

“What made Tejpal do this?” everyone asks. Some answers suggest themselves, the most unsettling of which is that, for all his rhetoric in the cause of women, Tejpal is, perhaps, just another unreconstructed, predatory Indian male who was playing the part of PC editor for commercial effect.

More here.

Three Weeks Before Vladimir Nabokov’s Lolita, There Was Dorothy Parker’s

Galya Diment in Vulture:

By 1955, the writing careers of Vladimir Nabokov and Dorothy Parker were headed in opposite directions. Parker’s was in a deep slump. The New Yorker—a magazine she had been instrumental in founding—had not published her fiction in fourteen years. Nabokov, by contrast, was becoming a literary sensation. The New Yorker had published several of his short stories as well as chapters of his autobiography Conclusive Evidence and of his novel Pnin. His next novel, Lolita, would bring him worldwide recognition for its virtuosic prose and the shocking story of a middle-aged man’s relationship with his pubescent stepdaughter and her aggressive mother. It was a manuscript that Nabokov circulated very little because he feared the controversy that would erupt when it was published. Yet three weeks before Lolita arrived in bookstores in France, where it first came out that September, Parker published a story—in The New Yorker, of all places—titled “Lolita,” and it centered on an older man, a teen bride, and her jealous mother. How could this have come to pass?

Nabokov had initially discussed his forthcoming book with his editor at The New Yorker, Katharine White, in 1953. “Don’t forget,” she wrote to him on November 4, “that you promised to let us read the manuscript … Sometimes a chapter or several chapters can be made into separate short stories.” At the end of that year, Nabokov’s wife, Véra, contacted White on his behalf: “He finished his novel yesterday,” Véra wrote to White, “and is bringing two copies, one for you, the other for the publisher. There are some very special reasons why the MS would have to be brought by one of us for there are some things that have to be explained personally.”

More here.

Families: our changing definition

Natalie Angier in The New York Times:

Kristi and Michael Burns have a lot in common. They love crossword puzzles, football, going to museums and reading five or six books at a time. They describe themselves as mild-mannered introverts who suffer from an array of chronic medical problems. The two share similar marital résumés, too. On their wedding day in 2011, the groom was 43 years old and the bride 39, yet it was marriage No. 3 for both. Today, their blended family is a sprawling, sometimes uneasy ensemble of two sharp-eyed sons from her two previous husbands, a daughter and son from his second marriage, ex-spouses of varying degrees of involvement, the partners of ex-spouses, the bemused in-laws and a kitten named Agnes that likes to sleep on computer keyboards.

If the Burnses seem atypical as an American nuclear family, how about the Schulte-Waysers, a merry band of two married dads, six kids and two dogs? Or the Indrakrishnans, a successful immigrant couple in Atlanta whose teenage daughter divides her time between prosaic homework and the precision footwork of ancient Hindu dance; the Glusacs of Los Angeles, with their two nearly grown children and their litany of middle-class challenges that seem like minor sagas; Ana Perez and Julian Hill of Harlem, unmarried and just getting by, but with Warren Buffett-size dreams for their three young children; and the alarming number of families with incarcerated parents, a sorry byproduct of America’s status as the world’s leading jailer.

The typical American family, if it ever lived anywhere but on Norman Rockwell’s Thanksgiving canvas, has become as multilayered and full of surprises as a holiday turducken — the all-American seasonal portmanteau of deboned turkey, duck and chicken.

…In increasing numbers, blacks marry whites, atheists marry Baptists, men marry men and women women, Democrats marry Republicans and start talk shows. Good friends join forces as part of the “voluntary kin” movement, sharing medical directives, wills, even adopting one another legally. Single people live alone and proudly consider themselves families of one — more generous and civic-minded than so-called “greedy marrieds.”

More here.

American Anarchist


Matthew Robare in The American Conservative:

The main currents of anarchist thought were derived from classical liberal ideas that emerged in the Enlightenment and the Romantic era. The central idea, Chomsky said, was that “institutions that constrain human development are illegitimate unless they can justify themselves.” Anarchists seek to challenge those institutions and dismantle the ones that cannot be justified, while creating new institutions from the ground up based on cooperation and benefits for the community. This tradition of libertarian socialism or anarcho-syndicalism was still alive, Chomsky claimed, despite challenges and suppression.

Paraphrasing the German-American anarchist Rudolf Rocker, Chomsky said that anarchism seeks to free labor from economic exploitation and society from ecclesiastical guardianship. This meant that workers struggle for their well-being and dignity—“for bread and roses,” as he put it—while rejecting the convention of working for others in exchange for money, which he described as a kind of slavery. The other opposition, to ecclesiastical guardianship, he explained as not necessarily an opposition to organized religion—he praised Dorothy Day’s Catholic Worker movement and the Christian anarchism of the Basque Country. Rather, Chomsky articulated an opposition to the idea that society should be regulated by an elite group, whether they are liberal technocrats, religious clerics, or corporate executives.

Chomsky also addressed some of the issues confronting anarchist activism, noting that while anarchists stand against the state, they often advocate for state coercion in order to protect people from “the savage beasts” of the capitalists, as he put it. Yet he saw this as not a contradiction, but a streak of pragmatism. “People live and suffer in this world, not one we imagine,” Chomsky explained. “It’s worth remembering that anarchists condemn really existing states instead of idealistic visions of governments ‘of, by and for the people.’”

More here.

Higgs boson book scoops Royal Society Winton Prize

Congratulations, Sean!

Long-time 3QD friend and supporter Sean Carroll has won the £25,000 Royal Society Winton Prize for his book The Particle at the End of the Universe. I am proud to say that I was one of the people Sean had sent the manuscript to for comment before publication and I loved it immediately. Here is what I had said on Facebook about it when it was published:

It is a tour de force of science writing. If you don't have a good understanding of what all the fuss was about when the LHC announced the discovery of the Higgs boson earlier this summer, you will after you read this brilliantly accessible account of the science behind the discovery and also all its attendant human drama.

Am very happy the Royal Society agreed! This is from the BBC:

Theoretical physicist Sean Carroll scoops the £25,000 award for his book The Particle at the End of the Universe.

His work beat five other titles that ranged across topics that broadly focussed on life in its many forms and its internal workings.

But the judges were unanimous in their decision to give Dr Carroll the prize.

Prof Uta Frith, from University College London and chair of the judges, said of the winning book: “It is an exceptional example of the genre and a real rock star of a book. Though it's a topic that has been tackled many times before.

“Carroll writes with an energy that propels readers along and fills them with his own passion. He understands their minds and anticipates their questions. There's no doubt that this is an important, enduring piece of literature.”

The prize was announced at the society's central London headquarters.

Dr Carroll said it was “completely unexpected”.

“It was a great thrill. I honestly thought, of the six people in this room, anyone could have won.”

More here.

In Defense of a Loaded Word


Ta-Nehisi Coates in the NYT:

It might be true that you refer to your spouse as Baby. But were I to take this as license to do the same, you would most likely protest. Right names depend on right relationships, a fact so basic to human speech that without it, human language might well collapse. But as with so much of what we take as human, we seem to be in need of an African-American exception.

Three weeks ago the Miami Dolphins guard Richie Incognito, who is white, was reported to have addressed his fellow Dolphin as a “half-nigger.” About a week later, the Los Angeles Clippers forward Matt Barnes, who is black, tweeted that he was “done standing up for these niggas” after being ejected from a game for defending his teammate. This came after the Philadelphia Eagles wide receiver Riley Cooper, who is white, angrily called a black security guard a “nigger” in July.

What followed was a fairly regular ritual debate over who gets to say “nigger” and who does not. On his popular show “Pardon the Interruption,” Tony Kornheiser called on the commissioners of the National Football League, the National Basketball Association and Major League Baseball to ban their players from publicly using the word. The ESPN host Skip Bayless went further, calling “nigger” “the most despicable word in the English language — verbal evil” and wishing that it could “die the death it deserves.”

Mr. Bayless and Mr. Kornheiser are white, but many African-Americans have reached the same conclusion. On Thursday, the Fritz Pollard Alliance Foundation, a group promoting diversity in coaching and in the front offices of the N.F.L., called on players to stop using “the worst and most derogatory word ever spoken in our country” in the locker rooms. In 2007 the N.A.A.C.P. organized a “funeral” in Detroit for the word “nigger.” “Good riddance. Die, n-word,” said Kwame Kilpatrick, then the mayor. “We don’t want to see you around here no more.”

But “nigger” endures — in our most popular music, in our most provocative films and on the lips of more black people (like me) than would like to admit it. Black critics, not unjustly, note the specific trauma that accompanies the word. For some the mere mention of “nigger” conjures up memories of lynchings and bombings. But there’s more here — a deep fear of what our use of the word “nigger” communicates to white people.

More here.

Nothing at all

by Dave Maier

Most philosophical chestnuts leave me cold. Their standard formulations usually have some confusion or preconception or equivocation in there somewhere, so that even when the original puzzle makes sense, the real philosophical action has moved on, often to places unrecognizable to the layman (for good or ill). Take the one about whether, when a tree falls in the forest with no one around to hear, it makes a sound. (According to a recent TV ad, yes, to wit: “Aaaagh! [*wham*] … little help? Anyone? Hello?”) My answer: it depends on what you mean by “sound”; in one sense, yes, in another, no. Both uses of the term are perfectly well established – you just can't use them interchangeably. There are of course some live philosophical questions about perception and reality to keep us busy; this just isn't one of them.

Naturally this isn't enough for some people. The answer is “merely semantic,” and doesn't engage the real mystery of subjects in an objective world. The other questions about perception I mentioned – where for my money the “real action” is – don't give us that same buzz. They're boring, technical, overly analytic. Worse, their focus is disappointingly narrow. Whatever the fate of, say, the doctrine of epistemological disjunctivism, we're a long way from the wonder in which philosophy supposedly begins. Whatever happened, these people ask, to the quest for the True, the Good, and the Beautiful?

My own response to this is that if we just keep our eyes on the true (in inquiry), the good (in action), and the beautiful (in lots of places), then the all-caps TGB (whatever, if anything, they turn out to be) can take care of themselves. But other philosophers – let's call them “naturalists” – take a more actively deflationary line against what they see as mystical obscurantism. If there are any mysteries here, they are scientific mysteries, best answered with the no-nonsense tools of empirical science; and philosophy's task is not to try to deal with these questions itself, but just to clear the way for science. To do otherwise, according to naturalists, leads to metaphysics – or worse, theology.

If we get all that from just the tree in the forest, imagine what happens when our question is the greatest chestnut of them all: why is there anything at all, instead of nothing? Here a theological answer is so close you can taste it – and whether that taste be yummy or foul, that tends to be what underlies the more contentious answers to our question. To the main combatants, who think it of such monumental importance, there doesn't seem to be any room between naturalism and metaphysics.

I bring this up today not because I have suddenly developed a philosophical interest in this question (that is, the chestnut itself), but instead because I have just begun physicist Lawrence Krauss's 2012 book A Universe From Nothing: Why There Is Something Rather than Nothing, which comes down firmly on the naturalist side, and I'm already not appreciating the characteristic naturalist tendency to run together resistance to naturalism, on the one hand, with creationism/theology/metaphysics (along with right-wing politics and who knows what else) on the other. (Nor do I accept the converse identification, made by the TGB brigade, of resistance to their metaphysical project, on the one hand, with a nihilistic “scientism” on the other.) My kind of philosopher tends to ignore this particular chestnut completely, so it's not surprising that our sensibility is not well represented in these discussions, but I'd like to get a couple of cents in if I may.

Read more »

How the Economists Stole Christmas: Or How Not to Think About Gifts

by Ben Schreckinger

At the end of the week, in the predawn hours that most of us will spend sleeping off turkey and pumpkin pie, millions of Americans will gather in the dark to kick off Black Friday, the annual day-long frenzy of bargain-hunting that marks the beginning of the holiday season. Many economists wish they wouldn't.

Not because Black Friday, in which shoppers literally climb over each other to get at plastic toys and electronic gadgets, is an affront to human dignity. Not because it perpetuates crass materialism. But because, according to an influential strain of economic thinking, the act of gift-giving creates a dead-weight loss.

The seminal paper in this vein is Joel Waldfogel's “The Deadweight Loss of Christmas,” which goes so far as to estimate — based on interviews of Yale undergrads — that Christmas gifts represent a waste of many billions of dollars annually. Waldfogel's indictment of Christmas presents reads like a wonkier cousin of Jonathan Swift's modest proposal that the Irish eat their own babies — but it's totally sincere.

It was published just in time for Christmas in 1993. The Soviet Union had dissolved on Boxing Day only two years earlier. The market had kicked central planning's butt, which was great news for Americans, and especially great news for American economists. But it turned out to be bad news for Santa, because according to the logic of the market, Christmas is an obstacle to maximum efficiency.

That logic is straightforward: A person has a very good idea of her own needs, and given $100 to spend on herself, she'll spend that money on the things she wants most. But someone else spending $100 on a gift for that person probably has inferior knowledge of her preferences, and will buy her something she values less. The better option, then, is to give the recipient $100 and let her spend it for herself.
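
To make that arithmetic concrete, here is a minimal sketch of how a Waldfogel-style deadweight-loss tally works. The prices and valuations below are invented for illustration; they are not figures from his paper.

```python
# Toy illustration of the "deadweight loss of Christmas" bookkeeping (all figures invented).
# Each gift costs the giver its retail price but is only worth whatever value
# the recipient places on it; the shortfall, summed over all gifts, is the deadweight loss.

gifts = [
    {"price_paid": 100.0, "value_to_recipient": 70.0},  # a sweater she'd never buy herself
    {"price_paid": 50.0, "value_to_recipient": 50.0},   # exactly what he wanted
    {"price_paid": 30.0, "value_to_recipient": 40.0},   # cheap, but sentimentally prized
]

total_spent = sum(g["price_paid"] for g in gifts)
total_value = sum(g["value_to_recipient"] for g in gifts)
deadweight_loss = total_spent - total_value

print(f"Spent ${total_spent:.0f}, valued at ${total_value:.0f}, "
      f"deadweight loss ${deadweight_loss:.0f}")
# Prints: Spent $180, valued at $160, deadweight loss $20
```

On this logic, handing the recipients $180 in cash would have closed the $20 gap, which is exactly the framing the rest of this piece pushes back against.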

In 2001, The Economist reexamined the case against gifts and came up with a somewhat more nuanced conclusion. Their analysis elaborates on special cases where a giver might be able to make more efficient use of the money — by giving the recipient what he really wants but won't buy for himself, for example — a possibility that Waldfogel acknowledges. It also stumbles upon the insight that gift-giving itself can give an item sentimental value. In the way that it can sometimes read like The Alien's Guide to Being Human, the magazine advises readers to “Try hard to guess the preferences of each person on your list and then choose a gift that will have a high sentimental value.”

On this line of thinking, indifference curves still offer a useful tool for understanding gift-giving: If we add sentimental value to our model and run the figures again, we might be able to save Christmas after all. But in reality, dead-weight loss and Christmas just don't belong in the same sentence. To understand why, it's helpful to look to the work of UCLA anthropologist Alan Fiske, who's observed that human relationships follow four basic models that correspond to four sets of values: communal sharing, authority ranking, equality matching, and market pricing.

Read more »

Perceptions


Tomas Saraceno. On Space Time Foam. HangarBicocca in Milan, Italy, 2012.

“… is a multi-layered habitat of membranes suspended 24 meters above the ground that is inspired by cosmology and life sciences. Each level has a different climate and air pressure and will react to the movement of visitors through it. In a later iteration, the work will become a floating biosphere above the Maldives Islands that is made habitable with solar panels and desalinated water.”

More here, here, and here.

Why you can’t buy a first class ticket to Utopia

by Emrys Westacott

Just about every high school would like more money and harder working students. I have a modest proposal to address both problems. In every high school cafeteria let there be two groups—call them, say, “premier” and “regular.” To be in the premier group, students must either pay an additional fifty percent on top of the normal price for a school lunch or be ranked academically in the top five percent of their class. Those in the premier group would enjoy a number of privileges: they queue in their own line, which gives them priority over “regulars” for receiving service; they sit in a separate section at special tables adorned with tablecloths and floral centerpieces; their chairs have padded seats; and they have more choice at the food counter. In addition to the options available to the regular group, they can avail themselves of a complimentary hors d'oeuvre, sparkling water instead of tap water, and an after-lunch coffee or cappuccino (with complimentary chocolate mint). Best of all, perhaps, they enjoy unfiltered internet access.

The benefits of the system should be obvious. The extra revenue generated by the premier group will (among other things) enable the school to offer better food to all while lowering prices for those in the standard group. And students will be inspired to work harder so that they can enjoy premier group privileges, or at least ensure that one day their own kids will do so.

Objections anyone? I can't think of any apart from the thought that the whole scheme is utterly pernicious, likely to breed arrogance on the one side, resentment on the other, and to foster social divisions that subtly fracture the community spirit that ideally would unite all members of the school.

My modest proposal occurred to me the other day when, for the first time, by some inexplicable fluke, I found myself assigned to a first class seat on a jumbo jet flying from Denver to Washington.

Read more »

Lydia Davis’s Proust: The Writer as Translator, the Translator as Writer

by Helane Levine-Keating

“When a foreign classic is retranslated, furthermore, we expect the translator to do something new to justify yet another version. And in raising the bar we might also expect the translator to be capable of describing this newness.” – Lawrence Venuti

Lydia Davis, 2013 winner of the Man Booker International Prize. Photo by David Ignaszewski.

As Umberto Eco has written in his essay “Borges and My Anxiety of Influence,” “books talk to each other.” And if indeed “books talk to each other,” there is also a conversation—often unspoken—that goes on between fiction writers, critics, and translators.

The fiction writer who also translates listens very carefully to the words that are written on the page. They are familiar words—they have influenced her writing for years. Throughout the process, she discusses each choice with the long-deceased writer whom she’s translating.


After the words have been strung into sentences, perhaps she dreams of meeting him in his cork-lined bedroom in Paris late at night when he is often wide awake and longing to talk. In the dream she asks him if he likes her translation, if he thinks she’s captured his humor, his particular point of view, his tone of voice. She asks him if she’s nailed the words with the same nails he’s used, more or less, and then she eagerly awaits his answer. A small smile plays on his lips. He coughs for a while, long enough for her foot to fall asleep as she sits cross-legged on the chaise longue near his bed. Finally she asks him again, this time in French, “Est-ce que ma traduction vous plaît ou non?” But in the dream there is no equivalent for “Yes” or “No.”

What does it mean, then, to be both a writer and a translator, who in each role is affected by the whims of the marketplace, the need to make a living, and, by extension, the critics who deem a text worthy or unworthy of being bought and read?

Read more »

Credit where none is due: creationist colleges and courses

by Paul Braterman

I am browsing school science textbooks published and marketed by an influential and nationally accredited US university. Here is what I find.[1] Satan wants people to believe in evolution. This is probably the main reason that evolution is so popular. Evolution relies on processes that cannot be observed, therefore it isn’t a scientific theory but depends on faith. The theory of biological evolution is not true because it contradicts the Bible. Many people believe in the evolutionary theory because they feel it eliminates God and lets them do what they want. Evolutionists are constantly finding evidence that runs counter to their claims, but discard it because of bias. The Flood is a better explanation of the fossil record than evolution. Missing links and common ancestors are absent from the fossil record because these organisms never existed. Radiometric dating involves so much guesswork that it is unreliable. Earth Day is the Festival of a false god; but a Christian must be confident that the God who made the world is able to maintain it. And much more in the same vein.

I came across all this rather indirectly. I recently saw a reference to someone, teaching at a non-accredited University in Albuquerque, who described himself as a Fellow of Oxford Graduate School. Having myself, many years ago, tried to become a Fellow of an Oxford college, and dismally failed, I was ready to be impressed. But then it occurred to me that Fellowships are not awarded by Oxford University, but by each of its component colleges. Moreover, despite six years at Oxford and two graduate degrees, I had never heard of the Graduate School as a separate entity. So I decided this was worth looking into. And so it proved. Oxford Graduate School may be of little importance in itself, but it pointed me to a world of absurdities, where a university can only win accreditation by denying scientific reality, where such accreditation is recognised by the US government, and where those at institutions accredited in this way have exerted influence out of all proportion to their numbers.

Oxford Graduate School (OGS), like that place in England where they have been teaching since 1096, has the name “Oxford” in its title, and according to its web site it also calls its doctorate degree D.Phil. rather than Ph.D. And there the resemblance ends.

Read more »

Pavlov’s Belle: Cats with Intent

by Brooks Riley


I, Kitsch

At particularly difficult periods in my life, I study my cat. I’m not a biologist or an anthropomorphologist (okay, maybe I am), but I do occasionally like to read about animal behavior and its human interpreters and interpretations. Much research work examines the intelligence of animals based on tool-usage or communication skills: the New Caledonian crow fashioning hooks out of stiff leaves or twigs to dig up a tasty worm from a hole in a tree stump, chimpanzees doing math, gorillas like Koko using sign language, etc.

One word which surprisingly fails to appear in the scientific literature is ‘intent’. And it is exactly this word which enables me to decipher the mysteries of my cat.

In the animal world intent can mean the carrying out of instinctive behavior. A squirrel intends to store a nut, a lion intends to attack an antelope, a robin intends to dig up a worm. All of these intentions are hardwired into the ongoing survival apparatus of that animal. In the more intelligent animals, intent consists of more than one level: The New Caledonian crow intends to get that worm, yes, but before he can do so, he has to intend to fashion a tool out of a leaf to facilitate his first intention. This latter intention is intrinsically linked to the first intention, but it is not part of the instinctive level of intent. It is a learned behavior whose origins lie in an exquisite act of deductive reasoning: Some ancestral New Caledonian crow actually thought about the problem of how to reach an out-of-reach worm. And to find a solution, he had to imagine something outside of his own corporeal construction that might facilitate his goal. He put 1 (twig) and 1 (hole) together and came up with 2 (the worm). Then he taught his kids. That his descendants were able to repeat his invention and possibly even incorporate it into their own instinctive behavior doesn’t mitigate the fact that catching a worm in New Caledonia involves two intentions, both based on need, but only one based on original instinct.

But back to my cat. A domestic housebound cat doesn’t use its instinct anymore to hunt for food. If it’s hungry, it has to find another way to get food. And that other way also involves a bifurcation of intent: First it has to get your attention—by yowling, scratching the pristine upholstery, or jumping on you from a great height as you sleep. A different cat might utilize gentler means, such as butting its head against your leg or laying a paw on your arm. Whatever the means, she intends to annoy you in order to carry out her first intention, which is to get fed. Her tool, like that of the crow, is a secondary intention meant to enable the first intention. In Pavlovian terms, you are the dog, she is the bell. Instead of salivating, you rise up zombie-like from your bed and feed the cat.

Read more »

Love in the time of robots

by Thomas Wells

The robots are coming. Even if they don't actually think, they will behave enough like they do to take over most of the cognitive labour humans do, just as fossil-fuel powered machines displaced human muscle power in the 19th and 20th centuries. I've written elsewhere about the kind of changes this new industrial revolution implies for our political and moral economy if we are to master its utopian possibilities and head off its dystopian threats. But here I want to explore some more intimate consequences of robots moving into the household. Robots will not only be able to do our household chores, but care work, performing the labours of love without ever loving. I foresee two distinct tendencies. First, the attenuation of inter-human intimacy as we have less need of each other. Second, the attractiveness of robots as intimate companions.

Robots will allow us to economise on love

Robots are smartish machines that will soon be able to perform complicated but mundane tasks. They will be, relative to humans, low maintenance, reliable, and tireless. If they cost the same as cars, which doesn't seem implausible, most people will be able to afford at least one. That would effectively provide everyone with command over a full-time personal servant (actually more than full-time since they presumably won't need to sleep). Imagine how much easier life will be with someone else to do all the household chores (an incremental improvement on dishwashers and vacuum cleaners) and also the household care work like potty-training children (a revolutionary improvement). But also, imagine how this may disrupt the political-economy of the 'traditional' household and our dependency on love.

As feminist economists have long pointed out, households are factories in function and corporations in identity. They are factories because they apply human labour and tools to convert inputs like groceries, nappies, houses, etc. into things worth having, like meals, children, homes, etc. They are corporations because they are unified economic units, separated from the individualistic competitive market that operates outside its walls. The individuals who make up a household, like the employees of any firm, are supposed to work together as colleagues to advance the success and prosperity of the corporate 'family' as a whole, rather than to advance their own individual material interests as actors in a market would. Organising production outside of the market in this way makes economic sense in many circumstances, and for the same reasons we have business firms. Using the market comes with transaction costs associated with establishing trust and quality assurance between self-regarding strangers.

Read more »