No one who has ever had a baby chimp climb trustingly into her arms (I have) can doubt that Pan troglodytes is the nearest living relative to Homo sapiens. But how close does that make us? Should we, as some influential scientists and philosophers have argued, see chimpanzees as fellow humans and accord them the same rights? Or does science support the assumption of human uniqueness that has prevailed through most of civilisation?
As his title suggests, Jeremy Taylor is firmly in the second camp. A TV science producer, he was driven to distraction by a stream of documentaries showing cute chimps displaying apparently human traits such as empathy, tool-making and language. Egged on by a tendency to anthropomorphism, some primatologists have argued that the difference between chimpanzee and human cognition is simply a matter of degree. Taylor is having none of it: “To call the difference quantitative between alarm calls, food-specific grunts, whoops and Shakespeare; between night nests and twig tools, and the A380 passenger jet; and between retribution and food sharing and Aristotle and Mill is, to my mind, stretching a point, and a bit of an insult to human ingenuity and culture.”
At a superficial level, science supports the chimps 'R' us hypothesis. In 2005, scientists published a letter-by-letter comparison of the chimpanzee and human genomes, with an overall similarity of 96%. For comparison, studies of the human genome now put the similarity between any two individuals at between 97% and 99%.
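The "letter-by-letter comparison" behind that 96% figure boils down to counting matching positions between aligned sequences. As a minimal sketch (not the actual genome pipeline, which relies on alignment tools and handles insertions and deletions), assuming two toy sequences already aligned to equal length:

```python
# Illustrative only: percent identity between two pre-aligned sequences.
def percent_identity(seq_a: str, seq_b: str) -> float:
    """Percentage of aligned positions where the two sequences agree."""
    assert len(seq_a) == len(seq_b), "sequences must be aligned to equal length"
    matches = sum(1 for a, b in zip(seq_a, seq_b) if a == b)
    return 100.0 * matches / len(seq_a)

human = "ACGTTGCAACGTTGCAACGTTGCA"
chimp = "ACGTTGCAACGATGCAACGTTGCA"  # one substitution in 24 bases
print(round(percent_identity(human, chimp), 1))  # 95.8
```

The real comparison is vastly more involved (the 2005 study had to decide how to score gaps, duplications and unalignable regions), which is one reason headline similarity percentages vary between studies.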
The poet Constantine Cavafy was a cosmopolitan by both birth and inclination. His parents were Constantinople Greeks of what was then known as “good family”; by the time their youngest son was born in 1863, they were settled in Alexandria, Egypt, prosperous pillars of a thriving community. But after his father’s death in 1870, the family fortunes failed and Cavafy’s mother took her sons to live for a few years near her late husband’s relatives in Liverpool and London. (It’s said that afterward Cavafy’s Greek retained a faint English inflection.) The dimly remembered life of parties and servants was gone; in the early 1880s the British bombardment of Alexandria destroyed the family home. By the time the novelist E.M. Forster met Cavafy in 1918, he was living in a small apartment on the run-down Rue Lepsius. Alexandria, wrote Forster, “founded upon cotton with the concurrence of onions and eggs,” was “scarcely a city of the soul.”
Now, in 2009, it’s a version of the same story. Gates is once again regarded with suspicion because, as the cultural critic Michael Eric Dyson put it in an interview, he has committed the crime of being H.W.B., Housed While Black. He isn’t the only one thought to be guilty of that crime. TV commentators, laboring to explain the unusual candor and vigor of Obama’s initial comments on the Gates incident, speculated that he had probably been the victim of racial profiling himself. Speculation was unnecessary, for they didn’t have to look any further than the story they were reporting in another segment, the story of the “birthers” — the “wing-nuts,” in Chris Matthews’s phrase — who insist that Obama was born in Kenya and cite as “proof” his failure to come up with an authenticated birth certificate. For several nights running, Matthews displayed a copy of the birth certificate and asked, What do you guys want? How can you keep saying these things in the face of all evidence?
He missed the point. No evidence would be sufficient, just as no evidence would have convinced some of my Duke colleagues that Gates was anything but a charlatan and a fraud. It isn’t the legitimacy of Obama’s birth certificate that’s the problem for the birthers. The problem is again the legitimacy of a black man living in a big house, especially when it’s the White House. Just as some in Durham and Cambridge couldn’t believe that Gates belonged in the neighborhood, so does a vocal minority find it hard to believe that an African-American could possibly be the real president of the United States.
Gates and Obama are not only friends; they are in the same position, suspected of occupying a majestic residence under false pretenses. And Obama is a double offender. Not only is he guilty of being Housed While Black; he is the first in American history guilty of being P.W.B., President While Black.
On a chilly evening in late May, hundreds of Porto Alegre, Brazil, residents packed into the Cecores gymnasium of the working-class neighborhood of Restinga for their yearly regional Participatory Budgeting (PB) assembly. Mayor José Fogaça and his PB team sat before them at long tables. This year marked the 20th anniversary of the process in this southern city. The lively crowd cheered and waved banners. Residents spoke in support of their needs, or denounced the government for not fulfilling promises it had made. “Housing” was on the lips of many.
“I struggled. I’m proof of this,” said Fabiana dos Santos Nacimento, a mother of six, who won her own home through the PB process a decade ago. “I waited six or seven years to acquire my home. And now my daughters are here and I’m struggling to help them acquire a home next door.”
More than 750 residents voted housing as this year’s third most important priority, behind social assistance and roads. During the last decade and a half, thousands of working families with the National Movement of Struggle for Housing (MNLM, for Movimento Nacional de Luta pela Moradia) have won homes through participatory budgeting in this region alone.

The assembly was just one of 23 that occur in Porto Alegre every fall. At the assemblies, neighborhood residents participate in the allocation of city funds by prioritizing needs, proposing future government projects and electing neighborhood delegates and councilpersons to carry out their decisions throughout the year.
Renowned macroeconomist Paul Romer (perhaps the economist most responsible for getting the literature on endogenous growth up and running) has resigned from Berkeley to help start up a new institute dedicated to changing how we think about sovereignty, so as to make it easier both for countries to borrow rule-sets from each other, and perhaps to allow other countries to actively administer parts of their own territory. Romer notes
…we [should] rethink sovereignty (respect borders, but maybe create new systems of administrative control); rethink citizenship (allowing perhaps for voice without residency as well as residency without voice); and rethink scale (instead of focusing on nations, focus on new cities.) If nations are willing to experiment along these lines, they can create new places, places that can give more people access to the kind of rules that they would like to live and work under, and places that can sustain the historical process of entry and innovation in national systems of rules.
This is not only not as strange as it sounds, but actually has some considerable empirical precedent, as Alex Cooley and Hendrik Spruyt discuss in their new book, Contracting States (Amazon). State sovereignty is much more frequently abrogated than we think, and many states effectively control bits and pieces of other states’ territories. Sovereignty is not the single unitary phenomenon that it is often taken for, but instead “a bundle of rights and obligations that are dynamically exchanged and transferred between states.”
Via the Daily Dish, Jacob Sullum over at Reason’s Hit & Run:
Indiana lawyer Joshua Claybourn notes that the Henry Louis Gates affair (Gatesgate?) highlights the threat to civil liberties posed by laws prohibiting “disorderly conduct,” the offense for which Gates was arrested. In Massachusetts, a person is deemed “disorderly,” and therefore subject to a jail term of up to six months, if he
1) “engages in fighting or threatening, violent or tumultuous behavior,” or
2) “creates a hazard or physically offensive condition by any act which serves no legitimate purpose”
3) “with purpose to cause public inconvenience, annoyance or alarm,” or
4) “recklessly creates a risk thereof.”
Claybourn (who, for what it’s worth, is skeptical of Gates’ charges of racism) says:
This sort of definition is…similar to that found in most states, and in almost [every] instance it is fraught with vagaries, giving far too much discretion to police officers. In short, “disorderly conduct” can easily become a euphemism for whatever a particular police officer doesn’t like. That kind of environment runs counter to fundamental ideals of the American system.
The danger of such discretion is clear from the report on Gates’ arrest. Sgt. James Crowley, the Cambridge police officer who arrested Gates at his home after responding to an erroneous burglary report, claims the Harvard professor’s complaints and charges of racism amounted to “tumultuous behavior” that recklessly created a risk of “public inconvenience, annoyance or alarm.” How so?
UPDATE: Via billy in the comments, Henry Farrell has posted a defense of the discretionary power of police by Brandon del Pozo, a captain in the NYPD and a Ph.D. candidate in philosophy at CUNY, over at Crooked Timber. I see some merit in del Pozo’s arguments for discretionary power, though I’m not fully convinced by them, and I’m far from convinced that Sergeant Crowley used that power wisely; if anything, I’m inclined to think he used it unwisely.
From my own experience and what I have learned about the incident, I highly doubt that I would have ordered the arrest of Professor Gates for any charge. I do, however, think that based on his actions as alleged by Sergeant Crowley, his arrest was somewhat plausible within the universe of possible outcomes to the incident. That still does not mean that the cops in question weren’t acting “stupidly,” as President Obama suggested. It is possible to do a lawful thing that is stupid, and that is why officers have discretion in many cases. While it can be misused, discretion is there to prevent them from stupidly enforcing the letter of the law. That the arrest was unwise and imprudent has also been made clear by how quickly the charges were dropped and the apologies issued by the government of Cambridge.
On the other hand, I do feel that Professor Gates seems to have acted inappropriately. There was no good reason for him to converse belligerently with the responding officer from his first words, or accuse him of racism, or refuse to answer basic questions directly related to the scope of the officer’s legitimate investigation. Of course, Gates also had the prerogative to say nothing at all, but this is different from saying nothing constructive, and instead issuing verbal abuse.
“Daddy, why did Jesus invent butterflies if they die after two weeks?”
I just about hit the panic button when my six-year-old son Theo put this question to me not long ago. His mother, who is a Christian, had taught him that Jesus was God. When Jesus's visage appears in a painting or on television, Theo sometimes exclaims, “That's God!” In his butterfly question he seemed to reason, syllogistically, that if Jesus was God, and God created the world and its life forms (butterflies being one of them), Jesus “invented” the winged creatures. Either that or God and Jesus are simply interchangeable in his mind.
“First, Theo, your question presumes that Jesus was God,” I responded. “Many people, like mommy, believe he was, but many others don't. It also presumes that there is a God – we don't know for sure that there is.” “I think there is,” he retorted. “There may very well be a God, Theo. But not everyone agrees on that – there are many people who doubt there is a God. We might never know for sure if there is or not,” I told him. “When we die we'll know,” he came back. “Maybe,” I said. “But maybe not.”
The literalism packed into Theo's question alarmed me, but this was by no means my first encounter with the influence of religion on my progeny. My ten-year-old son Elijah enjoys going to church with his mother – not every Sunday, but not infrequently. I've never discouraged it. One Monday morning a few months ago, though, I saw him reading the Bible, a children's Bible he'd been given at his mother's church. In no way did I discourage him from reading it. But I confess (as it were) that I went to work that day a bit preoccupied.
The first thing you need to know about Goldman Sachs is that it’s everywhere. The world’s most powerful investment bank is a great vampire squid wrapped around the face of humanity, relentlessly jamming its blood funnel into anything that smells like money. In fact, the history of the recent financial crisis, which doubles as a history of the rapid decline and fall of the suddenly swindled-dry American empire, reads like a Who’s Who of Goldman Sachs graduates.
By now, most of us know the major players. As George Bush’s last Treasury secretary, former Goldman CEO Henry Paulson was the architect of the bailout, a suspiciously self-serving plan to funnel trillions of Your Dollars to a handful of his old friends on Wall Street. Robert Rubin, Bill Clinton’s former Treasury secretary, spent 26 years at Goldman before becoming chairman of Citigroup — which in turn got a $300 billion taxpayer bailout from Paulson. There’s John Thain, the asshole chief of Merrill Lynch who bought an $87,000 area rug for his office as his company was imploding; a former Goldman banker, Thain enjoyed a multibillion-dollar handout from Paulson, who used billions in taxpayer funds to help Bank of America rescue Thain’s sorry company. And Robert Steel, the former Goldmanite head of Wachovia, scored himself and his fellow executives $225 million in golden-parachute payments as his bank was self-destructing. There’s Joshua Bolten, Bush’s chief of staff during the bailout, and Mark Patterson, the current Treasury chief of staff, who was a Goldman lobbyist just a year ago, and Ed Liddy, the former Goldman director whom Paulson put in charge of bailed-out insurance giant AIG, which forked over $13 billion to Goldman after Liddy came on board. The heads of the Canadian and Italian national banks are Goldman alums, as is the head of the World Bank, the head of the New York Stock Exchange, the last two heads of the Federal Reserve Bank of New York — which, incidentally, is now in charge of overseeing Goldman — not to mention …
I’d meant to post this a while ago; I think Chris Schoen mentioned it a few weeks ago as well. Sharon Begley in Newsweek:
Over the years these arguments [that “rape is…an adaptation, a trait encoded by genes that confers an advantage on anyone who possesses them”] have attracted legions of critics who thought the science was weak and the message (what philosopher David Buller of Northern Illinois University called “a get-out-of-jail-free card” for heinous behavior) pernicious. But the reaction to the rape book was of a whole different order. Biologist Joan Roughgarden of Stanford University called it “the latest ‘evolution made me do it’ excuse for criminal behavior from evolutionary psychologists.” Feminists, sex-crime prosecutors and social scientists denounced it at rallies, on television and in the press.
Among those sucked into the rape debate that spring was anthropologist Kim Hill, then Thornhill’s colleague at UNM and now at Arizona State University. For decades Hill has studied the Ache, hunter-gatherer tribesmen in Paraguay. “I saw Thornhill all the time,” Hill told me at a barbecue at an ASU conference in April. “He kept saying that he thought rape was a special cognitive adaptation, but the arguments for that just seemed like more sloppy thinking by evolutionary psychology.” But how to test the claim that rape increased a man’s fitness? From its inception, evolutionary psychology had warned that behaviors that were evolutionarily advantageous 100,000 years ago (a sweet tooth, say) might be bad for survival today (causing obesity and thence infertility), so there was no point in measuring whether that trait makes people more evolutionarily fit today. Even if it doesn’t, evolutionary psychologists argue, the trait might have been adaptive long ago and therefore still be our genetic legacy. An unfortunate one, perhaps, but still our legacy. Short of a time machine, the hypothesis was impossible to disprove. Game, set and match to evo psych.
Or so it seemed. But Hill had something almost as good as a time machine. He had the Ache, who live much as humans did 100,000 years ago. He and two colleagues therefore calculated how rape would affect the evolutionary prospects of a 25-year-old Ache. (They didn’t observe any rapes, but did a what-if calculation based on measurements of, for instance, the odds that a woman is able to conceive on any given day.) The scientists were generous to the rape-as-adaptation claim, assuming that rapists target only women of reproductive age, for instance, even though in reality girls younger than 10 and women over 60 are often victims. Then they calculated rape’s fitness costs and benefits. Rape costs a man fitness points if the victim’s husband or other relatives kill him, for instance. He loses fitness points, too, if the mother refuses to raise a child of rape, and if being a known rapist (in a small hunter-gatherer tribe, rape and rapists are public knowledge) makes others less likely to help him find food. Rape increases a man’s evolutionary fitness based on the chance that a rape victim is fertile (15 percent), that she will conceive (a 7 percent chance), that she will not miscarry (90 percent) and that she will not let the baby die even though it is the child of rape (90 percent). Hill then ran the numbers on the reproductive costs and benefits of rape. It wasn’t even close: the cost exceeds the benefit by a factor of 10. “That makes the likelihood that rape is an evolved adaptation extremely low,” says Hill. “It just wouldn’t have made sense for men in the Pleistocene to use rape as a reproductive strategy, so the argument that it’s preprogrammed into us doesn’t hold up.”
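The benefit side of Hill's arithmetic can be reproduced from the probabilities quoted above; the cost-side figures are not given in the text, so this back-of-envelope sketch stops at the expected reproductive benefit per act:

```python
# Benefit-side probabilities as quoted in the passage above.
p_fertile        = 0.15  # chance the victim is of fertile status
p_conceive       = 0.07  # chance she will conceive
p_no_miscarriage = 0.90  # chance she will not miscarry
p_child_raised   = 0.90  # chance she will not let the baby die

# Expected offspring gained per act: the product of the four probabilities.
expected_benefit = p_fertile * p_conceive * p_no_miscarriage * p_child_raised
print(f"{expected_benefit:.4f}")  # 0.0085
```

Under one percent of an offspring per act, in other words, which is why even modest cost estimates (retaliation, infanticide, withdrawal of food sharing) swamp the benefit by the factor of 10 that Hill reports.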
“There’s a certain slant of light, / On winter afternoons,” wrote Emily Dickinson, “That oppresses, like the Heft / Of cathedral tunes.” And although humans have developed nature-defying central heating and electric light bulbs since the American poet wrote those lines, many members of our species are still prone to the winter blues, or Seasonal Affective Disorder (SAD). As the weather gets colder and daylight hours dwindle, SAD sufferers feel their energy levels slumping. They can become depressed and lethargic, and crave sweet or starchy foods. For despite our cocooning technology we are still, like all life on earth, subjects of the seasons.
In Rhythms of Life (2004), their first book on the fascinating chronobiology that ticks away inside every living cell, Russell Foster (professor of circadian neuroscience at Oxford) and Leon Kreitzman (broadcaster and author of The 24 Hour Society) explained the circadian rhythms generated by the Earth’s 24-hour rotation on its axis. They took us through the science behind flowers opening their petals in the day and folding them up at night, just as we wake and sleep. They advised that the best time of day for giving an impressively firm handshake is around 6pm and the best time for giving birth is between 4am and 6am. Now they’re taking in the bigger picture, pointing out that just as all creatures have an internal, 24-hour clock, so we also have an internal calendar governed by the Earth’s 365-day orbit around the sun.
Are geeks guilty of groupthink? A network expert argues that less social networking would produce more radical innovation on the Internet. “An overabundance of connections over which information can travel too cheaply can reduce diversity, foster groupthink, and keep radical ideas from taking hold,” Viktor Mayer-Schönberger, director of the Information + Innovation Policy Research Center at the National University of Singapore, writes in this week's issue of the journal Science. That may be one of the reasons why much of the open-source software currently being produced is rarely altered in anything more than an incremental manner, Mayer-Schönberger says. “The basic point that I'm trying to make is … how do we get to the next stage of the Internet, the new-generation Internet, the radical innovation, rather than another dot release on the Firefox browser?” he told me today.
Question: Your last book, The Hermeneutic Nature of Analytic Philosophy: A Study of Ernst Tugendhat, centered on the German philosopher in order to dismiss the division of philosophy into the analytic and continental schools, while in this new book you seem to engage in a strictly ontological issue: “What remains of Being after the deconstruction of metaphysics?” What is the difference between both books? What is the goal now?
Santiago Zabala: I don’t think there is a big difference since they both engage in what has become the most important problem for philosophy since Heidegger: how can metaphysics be overcome? While in the first book I gave an answer through the postmetaphysical thought of Tugendhat, in this new book I confront the problem at its root, that is, through the concept of Being. Although in this new book I include a whole section on Tugendhat (as well as sections on Jacques Derrida, Reiner Schürmann, Jean-Luc Nancy, Hans-Georg Gadamer, and Gianni Vattimo), its purpose is to expose the remnants of Being in Tugendhat’s philosophy, which shows the continuity between both investigations. In sum, the goal of this book is to expose the remains of Being after Heidegger’s destruction of metaphysics in contemporary philosophy. The greatest achievements of this destruction are, first, the revelation that Being has always been described as a present object in its presentness and, second, the realization that it is not possible to definitively overcome this objective interpretation without falling back into another descriptive interpretation. In this condition, where metaphysics cannot be “überwinden” (overcome, meaning a complete abandonment of the problem) but can only be “verwinden” (surpassed, alluding to the way one surpasses a major disappointment not by forgetting it but by coming to terms with it), it is necessary to start interpreting Being through its remains, which is a concept that maintains metaphysics in such a way as to also overcome it.
Why are we so fascinated by Danton? Perhaps because we know so little about him. We know he was a powerful orator, and popular on the streets of revolutionary Paris. We know that, with the enemy at the gates, he made stirring speeches urging the French to save their revolution by being tirelessly bold. We know, above all, that in the spring of 1794 he questioned whether a policy of Terror was doing more harm than good, and paid the price by being guillotined himself. He thus died a martyr to humanity, struck down by his polar opposite, the frigid and inflexible Robespierre. These are the elements of a legend that began in the 1830s with Georg Büchner’s play Danton’s Death, and was taken up by historians, particularly those writing in English, from Thomas Carlyle onwards. If only he had prevailed, the bloody climax of the Terror might have been avoided! Lesser and meaner men brought him down. In France, too, Danton had his fervent liberal advocates, but in the twentieth century the admirers of Robespierre gained the upper hand in the historical profession. They largely accepted the accusations great and small thrown at Danton at his show trial: he was venal, greedy, immoral, a political trimmer, a closet royalist and even perhaps a traitor. Much of the historical evidence comes down sooner or later to what historians think of these accusations, themselves scraped hastily together by Robespierre once he decided that Danton had to go. Corroborative evidence is remarkably scanty and almost always ambiguous. Danton seldom wrote anything down, least of all his famous speeches, and at crucial moments he had a habit of disappearing. Most of the secondary information comes from an increasingly intimidated revolutionary press, or later recollections by third parties with their own records to protect.
In 1815, Cardinal Angelo Mai made an extraordinary discovery in the Ambrosian Library in Milan. He spotted that a book containing the records of the First Church Council of Chalcedon in AD 451 had been made out of reused parchment. The earlier writing on each sheet had been erased (washing with milk and oat-bran was the common method), and the minutes of the Church Council copied on top. As often in reused documents of this kind, the original text had begun to show through the later writing, and was in part legible. It turned out that the recycled sheets had come from a very mixed bag of books. There was a single page of Juvenal’s Satires, part of Pliny’s speech in praise of Trajan (the Panegyric) and some commentary on the Gospel of St John. But the prize finds, making up the largest part of the book, were faintly legible copies of the correspondence of Marcus Cornelius Fronto, one of the leading scholars and orators of the second century AD, and tutor to the future emperor Marcus Aurelius, who reigned from 161 to 180. The majority of the letters in the palimpsest were between Fronto and Marcus Aurelius himself, both before and after he had ascended to the throne. Unlike the passages from Juvenal and Pliny, these were entirely new discoveries.
One hundred years ago this month, Marion Wallace-Dunlop (1864–1942) became the first modern hunger striker. She came to her prison cell as a militant suffragette, but also as a talented artist intent on challenging contemporary images of women. After she had fasted for ninety-one hours in London’s Holloway Prison, the Home Office ordered her unconditional release on July 8, 1909, as her health, already weak, began to fail. Her strike influenced those of Mohandas Gandhi, James Connolly and others who followed her example. Thousands of other strikes have moved the practice in new directions, but we should acknowledge its originator. Students and scholars of the women’s suffrage movement know Wallace-Dunlop’s name, some of her protests, and the main events of her life, but her art and writings are almost entirely unknown. Recently discovered works of hers reveal a mind that knew how images are read, how stories are made and publicity generated. Together with materials released in 2005 under the British Freedom of Information Act (2000), Wallace-Dunlop’s art and writings, her prints, sketches, letters and photos, provide a more complete genealogy of the hunger strike, and show a woman challenging the aesthetic and gender boundaries of her day. Her oil portrait of her sister Constance (“Miss C. W. D.”, 1892) portrays a woman with a shawl wrapped around her shoulders, who sits erect, alarmed, with a tinge of fear, and stares disturbingly out at the viewer.
Bookstores are getting shipments of a significantly changed edition of Ernest Hemingway’s masterpiece, “A Moveable Feast,” first published posthumously by Scribner in 1964. This new edition, also published by Scribner, has been extensively reworked by a grandson who doesn’t like what the original said about his grandmother, Hemingway’s second wife.
The grandson has removed several sections of the book’s final chapter and replaced them with other writing of Hemingway’s that the grandson feels paints his grandma in a more sympathetic light. Ten other chapters that roused the grandson’s displeasure have been relegated to an appendix, thereby, according to the grandson, creating “a truer representation of the book my grandfather intended to publish.”
It is his claim that Mary Hemingway, Ernest’s fourth wife, cobbled the manuscript together from shards of an unfinished work and that she created the final chapter, “There Is Never Any End to Paris.”
Scribner’s involvement with this bowdlerized version should be examined as it relates to the book’s actual genesis, and to the ethics of publishing.