Neuroscientists: We Don’t Really Know What We Are Talking About, Either

Ferris Jabr over at the Scientific American blog:

At a surprise April 1 press conference, a panel of neuroscientists confessed that they and most of their colleagues make up half of what they write in research journals and tell reporters. “We’re always qualifying our conclusions by reminding people that the brain is extremely complex and difficult to understand—and it is,” says Philip Tenyer of Harvard University, “but we’ve also been a little lazy. It is just easier to bluff our way through some of it. That’s one perk of being a respected neuroscientist—you can pretty much say whatever you want about the brain because so few people, including other neuroscientists, understand what you’re talking about in the first place. As long as you throw in enough jargon, it sounds science-y and legit and stuff.”

“It’s not just what we write in our studies,” explains Stephanie Sigma of Stanford University. “It’s a lot of the pretty pictures, too. You know those images with captions claiming that certain brain regions ‘light up’ like the Fourth of July? I mean, come on. Most of the participants in these studies are college freshmen who only enrolled in Intro Psychology to satisfy a mandatory academic requirement. There is only one thing they know how to ‘light up’—and it’s not their brains. Frankly, we were just hoping that the colorful images would keep people’s attention. People like pretty pictures—that is something we’ve shown in our studies. Although I can’t quite remember if that was one of the findings we made up or not…”

People who read a lot of neuroscience news have probably noticed several consistent contradictions, says Laura Sulcus of Dartmouth College. “Some studies say that different brain regions work in concert to perform a single complex task, whereas other studies argue that a particular cognitive function—such as recognizing faces—is basically the sole domain of one region. The thing is, just because one part of the brain shows more activity than another, it doesn’t mean that it is the only piece involved. But it is just so easy to pick a neglected area, dress it up with some colorful fMRI studies and present it to the world as a distinct, functional region of the brain. How can we resist?…”

White Until Proven Black: Imagining Race in Hunger Games

Anna Holmes in The New Yorker [h/t: Linta Varghese]:

On Tuesday, February 28th, a twenty-nine-year-old Canadian male fan of Suzanne Collins’s dystopian young adult trilogy, “The Hunger Games,” logged onto the popular blogging platform Tumblr for the first time and created a site he called Hunger Games Tweets. The young man, whom I’ll call Adam, had been tracking a disturbing trend among Hunger Games enthusiasts: readers who could not believe—or accept—that Rue and Thresh, two of the most prominent and beloved characters in the book, were black, had been posting vulgar racial remarks.

Adam, who read and fell in love with the trilogy last year, initially encountered these sorts of sentiments in the summer of 2011, when he began visiting Web sites, forums, and message boards frequented by the series’s fans, who were abuzz with news about the film version of the book. (The movie, released a week ago today, made a staggering $152.5 million during its first three days of release.) After an argument broke out in the comments section of an Entertainment Weekly post that suggested the young black actress Willow Smith be cast as the character of Rue, he realized that racially insensitive remarks by “Hunger Games” fans were features, not bugs. He soon began poking around on Twitter, looking at tweets that incorporated hashtags—#hungergames—used by the book’s devotees. Like the conversations found on message boards, some of the opinions were vitriolic, if not blatantly racist; unlike the postings on fan forums, however, the Twitter comments were usually attached to real identities.

“Naturally Thresh would be a black man,” tweeted someone who called herself @lovelyplease.

“I was pumped about the Hunger Games. Until I learned that a black girl was playing Rue,” wrote @JohnnyKnoxIV.

“Why is Rue a little black girl?” @FrankeeFresh demanded to know. (She amended her tweet with the hashtag admonishment #sticktothebookDUDE.)

Adam was shocked—Suzanne Collins had been fairly explicit about the appearance, if not the ethnicity, of Rue and Thresh, who, along with twenty-two other kids, are thrown into the life-or-death, Lord of the Flies-esque battle that the book is named for.

The Mastery of Non-Mastery

Jennifer Wallace on Michael Taussig’s I Swear I Saw This, in the Los Angeles Review of Books:

There are two types of anthropologists: One models himself on the scientist, treating the world as his laboratory, people as his raw data. He mounts surveys, crunches numbers, and, crucially, remains detached and dispassionate throughout the process. He applies for big research grants with “expected outcomes” and “anticipated impact” carefully delineated long before he has gone out into the field. The other kind of anthropologist is more like a religious initiate, participating fully in the culture in which he is placed and intimating that he is then the possessor of some secret knowledge. Like an initiate, he cannot anticipate any “outcomes” before they happen but must simply live in the moment and immerse himself in the local customs and values.

It is this latter tradition of which Michael Taussig, an eminent professor at Columbia University, is one of the greatest exponents. The New York Times has called his work “gonzo anthropology.” He has drunk hallucinatory yagé on the sandy banks of the Putumayo River. He’s cured the sick with the aid of spirits. He’s escaped from guerrillas in a dugout canoe at dawn. Above all, he is interested in individual stories and experiences, unique tales that cannot be reduced to rational explanation or bland report. To read Taussig is to have an adventure in which one can move from Walter Benjamin’s experiments with hashish to American kids’ drawings to that dawn-lit canoe without skipping a beat. His narrative is lyrical, mesmeric.

At the center of Taussig’s method is the anthropologist’s desire to bear witness to what he cannot understand. Meditating on his sketch and notes, Taussig imbues the event with the magical aura of a collector’s gem. Was it chance or fate that brought them together in that tunnel? And was it chance or fate that transfigured the relatively common scene into something haunting and extraordinary?

The Center of the Rebellion

Lynne Weiss over at the always interesting “Dispatches” section of The Common:

Seneca Falls has a much bigger place in history than it does in geography. It is usually mentioned only as the location of the 1848 Women’s Rights Convention, famously organized by women’s rights crusader Elizabeth Cady Stanton. So rarely is it mentioned in any other context that one might think it did not exist before or after that event. It’s a small town, much like many other old mill towns in New England and upstate New York, and seems an unlikely setting for what Stanton called her farmhouse home, “The Center of the Rebellion.” (Stanton was proud of having kept her birth name, Cady, after she married, but for purposes of brevity I call her Stanton here.)

Ironically, it was because Seneca Falls was so humdrum that Stanton was driven to organize a convention. Before they moved to Seneca Falls in 1847, Stanton lived in Boston, where she and her husband Henry entertained leading thinkers and writers—William Lloyd Garrison, Lydia Maria Child, Frederick Douglass, Bronson Alcott, John Greenleaf Whittier, Margaret Fuller, Ralph Waldo Emerson, and Nathaniel Hawthorne. Stanton attended plays, concerts, and lectures. She had maids and nurses to help care for her seven children. She and Henry moved to Seneca Falls for the sake of Henry’s political career, but when he failed to win elected office, he became a political journalist, spending nearly all his time in Albany and Washington, D.C., leaving Stanton to her own devices.

Gershwin Writ Small

Joseph Horowitz on the controversial production of Porgy and Bess now on Broadway, in the TLS:

Porgy and Bess – with music by George Gershwin, a book by DuBose Heyward, and lyrics by Heyward and Ira Gershwin – split opinion when it opened on Broadway in 1935. No American could respond without prejudice to a black opera by a Brooklyn Jew with roots in Tin Pan Alley. Only immigrants and foreigners found it possible to acclaim Gershwin without patronizing him. A Broadway revival in 1942, recasting the opera as a musical, was more successful. In the 1950s and 60s, Porgy and Bess was little performed in the United States; its depiction of an impoverished African American courtyard community was considered demeaning. From 1976, a widely seen Houston Grand Opera production revalidated Porgy and Bess and proved its operatic mettle. A production at the Metropolitan Opera in 1985 was a ponderous failure.

The new Porgy and Bess is nothing if not boldly conceived. In 1942, five years after Gershwin’s death, his recitatives were replaced by dialogue, and cast and orchestra were greatly reduced in strength. Paulus and company have done that and more. We have new speeches, new harmonies, new accompaniments, even virtually new numbers. “Summertime” is a duet. “It take a long pull to get there” is a male vocal quartet distending Gershwin’s pithy fisherman’s tune. Both pit and stage are substantially amplified.

There can be no such thing as a Gershwin purist. It is part of his genius that he cannot be categorized. The cultural fluidity of Porgy and Bess – of Gershwin, generally – is such that he is also interpretively fluid. Stravinsky insisted that his music should not be interpreted, whereas with Gershwin, interpretation is both necessary and irresistible. Rhapsody in Blue has no definitive score or length. The Concerto in F can be sentimental or sec, “Russian” or “French”. The first recordings of Porgy’s songs range in style from the operatic largesse of Lawrence Tibbett’s humbling “Oh Bess, oh where’s my Bess?” (1935) to Avon Long’s swinging “I got plenty o’ nuttin’” with the Leo Reisman Orchestra (1942). There will never be an “authentic” Porgy and Bess.

A Quantum Theory of Mitt Romney

David Javerbaum in the NYT:

Before Mitt Romney, those seeking the presidency operated under the laws of so-called classical politics, laws still followed by traditional campaigners like Newt Gingrich. Under these Newtonian principles, a candidate’s position on an issue tends to stay at rest until an outside force — the Tea Party, say, or a six-figure credit line at Tiffany — compels him to alter his stance, at a speed commensurate with the size of the force (usually large) and in inverse proportion to the depth of his beliefs (invariably negligible). This alteration, framed as a positive by the candidate, then provokes an equal but opposite reaction among his rivals.

But the Romney candidacy represents literally a quantum leap forward. It is governed by rules that are bizarre and appear to go against everyday experience and common sense. To be honest, even people like Mr. Fehrnstrom who are experts in Mitt Romney’s reality, or “Romneality,” seem bewildered by its implications; and any person who tells you he or she truly “understands” Mitt Romney is either lying or a corporation.

Nevertheless, close and repeated study of his campaign in real-world situations has yielded a standard model that has proved eerily accurate in predicting Mitt Romney’s behavior in debate after debate, speech after speech, awkward look-at-me-I’m-a-regular-guy moment after awkward look-at-me-I’m-a-regular-guy moment, and every other event in his face-time continuum.

The basic concepts behind this model are:

Complementarity. In much the same way that light is both a particle and a wave, Mitt Romney is both a moderate and a conservative, depending on the situation (Fig. 1). It is not that he is one or the other; it is not that he is one and then the other. He is both at the same time.

Probability. Mitt Romney’s political viewpoints can be expressed only in terms of likelihood, not certainty. While some views are obviously far less likely than others, no view can be thought of as absolutely impossible. Thus, for instance, there is at any given moment a nonzero chance that Mitt Romney supports child slavery.

Dear Don Draper, It’s a Wonderful Life

Adam Wilson in The Paris Review:

Dear Don Draper,

Birthday greetings from the year 2012! Adam Wilson here, writing to tell you that things will be okay!

I know life looks bleak right now, Don. You just turned forty. You’re feeling it. Your frown lines tell the tale, your smoke-seasoned cheek skin, the whiskey jaundice blooming in your beautiful eyes. The way your manly body slumps and crumples, finally flaccid after decades of tumescence. It’s 1966 and everything’s orange and yellow, plush and furry, groovy, heady, already psychedelically aglow. At the end of last season you were smiling like a lobotomized monkey, gaga over Megan the secretarial sex machine, offering love and financial security in exchange for a peek at her abs. Now you’ve got the spoils of that horny dream and it’s not a pretty sight: an open plan apartment accented by white rugs and cream-colored decorative pillows; a wife whose sexual liberation extends outside your bedroom and into the public salon where she’ll embarrass you in front of your coworkers, strutting her silky stuff while a band of blond surf bros play anesthetized hippie pop; daughter Sally quickly turning Lolita; your son Bobby all but unrecognizable from last year (it’s not your fault—they changed the actor); baby Gene with his creepy, beady eyes; plus the possibility of even more unwanted children!

Don’t worry, buddy. It gets better. You know how, in It’s a Wonderful Life, that angel arrives to show Jimmy Stewart the future and convince him not to kill himself? I’m that angel, Don. And I’m telling you to quit smoking and slow down with the drinking, and maybe get some exercise and cut carbs, because the future’s coming—and you’re gonna like it. The quitting smoking part’s tough, I know. I’m going through it now myself. I’m on day two, and I can feel the missing nicotine like a great void at the center of my being. My fingers twitch and my armpits drip. I’m itchy and irritated and finding it hard to focus. But we can do this together, Don. We have to. Because frankly I’m not sure I can go through with this quitting thing if I have to watch you every week, guiltlessly enjoying your cigarettes, blowing billows of smoke into the stale office air.

More here.

The fall of the Roman empire and the rise of Islam

Tom Holland in The Guardian:

Whenever modern civilisations contemplate their own mortality, there is one ghost that will invariably rise up from its grave to haunt their imaginings. In February 1776, a few months after the publication of the first volume of The Decline and Fall of the Roman Empire, Edward Gibbon commented gloomily on the news from America, where rebellion against Britain appeared imminent. “The decline of the two empires, Roman and British, proceeds at an equal pace.” Now, with the west mired in recession and glancing nervously over its shoulder at China, the same parallel is being dusted down. Last summer, when the Guardian's Larry Elliott wrote an article on the woes of the US economy, the headline almost wrote itself: “Decline and fall of the American empire”.

Historians, it is true, have become increasingly uncomfortable with narratives of decline and fall. Few now would accept that the conquest of Roman territory by foreign invaders was a guillotine brought down on the neck of classical civilisation. The transformation from the ancient world to the medieval is recognised as something far more protracted. “Late antiquity” is the term scholars use for the centuries that witnessed its course. Roman power may have collapsed, but the various cultures of the Roman empire mutated and evolved. “We see in late antiquity,” so Averil Cameron, one of its leading historians, has observed, “a mass of experimentation, new ways being tried and new adjustments made.” Yet it is a curious feature of the transformation of the Roman world into something recognisably medieval that it bred extraordinary tales even as it impoverished the ability of contemporaries to keep a record of them. “The greatest, perhaps, and most awful scene, in the history of mankind”: so Gibbon described his theme. He was hardly exaggerating: the decline and fall of the Roman empire was a convulsion so momentous that even today its influence on stories with an abiding popular purchase remains greater, perhaps, than that of any other episode in history. It can take an effort, though, to recognise this. In most of the narratives informed by the world of late antiquity, from world religions to recent science-fiction and fantasy novels, the context provided by the fall of Rome's empire has tended to be disguised or occluded.

More here.

Sunday Poem

This One and That One

This one and That one and the Other have families
that are happy and solid, children, grandchildren
even great-grandchildren, who are blonde and study hard,
and verygoodkids, they are good and Christian people
but meanwhile your own children, God of God are
suffering from psoriasis and psychologically
unstable, so why oh God of all the gods of clay
do your children suffer and have tongues of clay?
Your children are your children and seem step-children.
But their children, their grandchildren, their generations
are not like ours this bunch of degenerate
and untouchable fathers and mothers of beggars
yet these your children, God of gods, are still
your children and they recognise you and they do
just what you told them they should do, while they
make the signs, make the sign of the cross, gulp down
hosts like they are dying of hunger (though they are full)
and your priests absolve them, assent and eat with them
oysters and whatever debilities they have,
and they give a blessing to their menstrual women
so that they will bear children and they do bear them,
yet there are hardly any of us, or they die
of natural causes or commit suicide.
Is there a reason why? There is no reason why.
You are the God it occurs to you to be.

by Armando Uribe
from Odio lo que odio, rabio como rabio
publisher: Editorial Universitaria, Santiago de Chile, 1998


Saturday, March 31, 2012

Debating the Usefulness of Election Models

There is a debate on the issue over at the NYT's FiveThirtyEight. First, Nate Silver:

…Lynn Vavreck’s excellent 2009 book, “The Message Matters,” for instance, made the following claim:

The economy is so powerful in determining the results of U.S. presidential elections that political scientists can predict winners and losers with amazing accuracy long before the campaigns start.

To be clear, that is the publisher’s copy and not Ms. Vavreck’s. However, statements like these have become fairly common, especially among a savvy group of bloggers and writers who sit at the intersection of political science and the mainstream media (a space that this blog, of course, occupies).

But is it true? Can political scientists “predict winners and losers with amazing accuracy long before the campaigns start”?

The answer to this question, at least since 1992, has been emphatically not. Some of their forecasts have been better than others, but their track record as a whole is very poor.

And the models that claim to be able to predict elections based solely on the fundamentals — that is, without looking to horse-race factors like polls or approval ratings — have done especially badly. Many of these models claim to explain as much as 90 percent of the variance in election outcomes without looking at a single poll. In practice, they have had almost literally no predictive power, whether looked at individually or averaged together.
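Silver's point is essentially about overfitting: with barely more than a dozen postwar elections to fit and several candidate “fundamentals” to fit them with, a regression can post an impressive in-sample R-squared while its genuine forecasts remain poor. Here is a minimal sketch of that gap on purely synthetic data; no real election figures or published model are used, and the sample size and predictor count are only loosely modeled on the fundamentals literature:

```python
# Illustrative sketch on synthetic data: with ~16 observations and several
# predictors, in-sample fit flatters the model relative to genuine
# out-of-sample forecasts (here simulated by leave-one-out cross-validation).
import numpy as np

rng = np.random.default_rng(0)
n_elections, n_predictors = 16, 6  # roughly the postwar sample size

# Hypothetical "fundamentals" (e.g. GDP growth, approval); only one of the
# six actually drives the synthetic two-party vote share.
X = rng.normal(size=(n_elections, n_predictors))
true_beta = np.zeros(n_predictors)
true_beta[0] = 2.0
vote = 52 + X @ true_beta + rng.normal(scale=3.0, size=n_elections)

def ols_fit(X_train, y_train):
    """Ordinary least squares with an intercept; returns coefficients."""
    A = np.column_stack([np.ones(len(X_train)), X_train])
    beta, *_ = np.linalg.lstsq(A, y_train, rcond=None)
    return beta

# In-sample R^2: fit on all elections, score on those same elections.
beta = ols_fit(X, vote)
fitted = np.column_stack([np.ones(n_elections), X]) @ beta
r2 = 1 - np.sum((vote - fitted) ** 2) / np.sum((vote - vote.mean()) ** 2)

# Out-of-sample check: leave each election out, refit, and forecast it.
loo_errors = []
for i in range(n_elections):
    b = ols_fit(np.delete(X, i, axis=0), np.delete(vote, i))
    pred = np.concatenate([[1.0], X[i]]) @ b
    loo_errors.append(abs(vote[i] - pred))

print(f"in-sample R^2:            {r2:.2f}")
print(f"mean leave-one-out error: {np.mean(loo_errors):.2f} points")
```

On runs like this the in-sample fit looks far more flattering than the leave-one-out errors, which is the gap between “explaining 90 percent of the variance” and actually predicting elections.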

John Sides responds:

I am less critical of the accuracy of these models than is Nate. For one, forecasters have different motives in constructing these models. Some are interested in the perfect forecast, a goal that may create incentives to make ad hoc adjustments to the model. Others are more interested in theory testing — that is, seeing how well election results conform to political science theories about the effects of the economy and other “fundamentals.” Models grounded in theory won’t be (or at least shouldn’t be) adjusted ad hoc. If so, then their out-of-sample predictions could prove less accurate, on average, but perfect prediction wasn’t the goal to begin with. I haven’t talked with each forecaster individually, so I do not know what each one’s goals are. I am just suggesting that, for scholars, the agenda is sometimes broader than simple forecasting.

Second, as Nate acknowledges but doesn’t fully explore (at least not in this post), the models vary in their accuracy. The average error in predicting the two-party vote is 4.6 points for Ray Fair’s model, but only 1.72 points for Alan Abramowitz’s model. In other words, some appear better than others — and we should be careful not to condemn the entire enterprise because some models are more inaccurate.

Third, if we look at the models in a different way, they arguably do a good enough job.

Arguing Science as Faith

First, Stanley Fish over at the NYT's Opinionator:

… [Chris] Hayes…posed the following question [to Richard Dawkins and Steven Pinker]: If you hold to the general skepticism that informs scientific inquiry — that is, if you refuse either to anoint a viewpoint in advance because it is widely held or to send viewpoints away because they are regarded as fanciful or preposterous — how do you respond to global-warming deniers or Holocaust deniers or creationists when they invoke the same principle of open inquiry to argue that they should be given a fair hearing and be represented in departments of history, biology and environmental science? What do you do, Hayes asked, when, in an act of jujitsu, the enemies of liberal, scientific skepticism wield it as a weapon against its adherents?

Dawkins and Pinker replied that you ask them to show you their evidence — the basis of their claim to be taken seriously — and then you show them yours, and you contrast the precious few facts they have with the enormous body of data collected and vetted by credentialed scholars and published in the discipline’s leading journals. Point, game, match.

Not quite. Pushed by Hayes, who had observed that when we accept the conclusions of scientific investigation we necessarily do so on trust (how many of us have done or could replicate the experiments?) and are thus not so different from religious believers, Dawkins and Pinker asserted that the trust we place in scientific researchers, as opposed to religious pronouncements, has been earned by their record of achievement and by the public rigor of their procedures. In short, our trust is justified, theirs is blind.

It was at this point that Dawkins said something amazing, although neither he nor anyone else picked up on it. He said: in the arena of science you can invoke Professor So-and-So’s study published in 2008, “you can actually cite chapter and verse.”

Jerry Coyne responds to Fish:

Fish’s big mistake: the reasons undergirding that belief are not that we can engage in a lot of philosophical pilpul to justify using reason and evidence to find out stuff about the universe. Rather, the reasons are that it works: we actually can understand the universe using reason and evidence, and we know that because that method has helped us build computers and airplanes, go to the moon, cure diseases, improve crops, and so on. All of us agree on these results. We simply don’t need a philosophical justification, and I scorn philosophers who equate religion and science because we don’t produce one. Religion doesn’t lead to any greater understanding of reality. Indeed, the faithful can’t even demonstrate to everyone’s satisfaction that a deity exists at all! The unanimity around evidence that antibiotics cure infections, that the earth goes around the sun, and that water has two hydrogen atoms and one oxygen atom, is not matched by any unanimity of the faithful about what kind of deity there is, what he/she/it is like, or how he/she/it operates. In what way has religion, which indeed aims to give us “understanding,” really produced any understanding? Fish goes on:

People like Dawkins and Pinker do not survey the world in a manner free of assumptions about what it is like and then, from that (impossible) disinterested position, pick out the set of reasons that will be adequate to its description. They begin with the assumption (an act of faith) that the world is an object capable of being described by methods unattached to any imputation of deity, and they then develop procedures (tests, experiments, the compilation of databases, etc.) that yield results, and they call those results reasons for concluding this or that. And they are reasons, but only within the assumptions that both generate them and give them point.

Yes, but we get results that all sane people agree on, and that actually help us get further results that help us solve problems and figure out why things are the way they are. Note how weaselly Fish is here by using the phrase “act of faith” to apply to both science and religion. Yes, it was originally an act of faith to assume that there was an external reality that could be comprehended by naturalistic processes, but it is no longer an act of faith: it is an act of confidence.

The Control Revolution And Its Discontents

Ashwin Parameswaran over at Macroeconomic Resilience:

One of the key narratives on this blog is how the Great Moderation and the neo-liberal era have signified the death of truly disruptive innovation in much of the economy. When macroeconomic policy stabilises the macroeconomic system, every economic actor is incentivised to take on more macroeconomic systemic risks and shed idiosyncratic, microeconomic risks. Those that figured out this reality early on and/or had privileged access to the programs used to implement this macroeconomic stability, such as banks and financialised corporates, were the big winners – a process that is largely responsible for the rise in inequality during this period. In such an environment the pace of disruptive product innovation slows but the pace of low-risk process innovation aimed at cost-reduction and improving efficiency flourishes. Therefore we get the worst of all worlds – the Great Stagnation combined with widespread technological unemployment.

This narrative naturally begs the question: when was the last time we had a truly disruptive Schumpeterian era of creative destruction? In a previous post looking at the evolution of the post-WW2 developed economic world, I argued that the so-called Golden Age was anything but Schumpeterian – as Alexander Field has argued, much of the economic growth till the 70s was built on the basis of disruptive innovation that occurred in the 1930s. So we may not have been truly Schumpeterian for at least 70 years. But what about the period from at least the mid 19th century till the Great Depression? Even a cursory reading of economic history gives us pause for thought – after all, wasn't a significant part of this period supposed to be the Gilded Age of cartels and monopolies, which sounds anything but disruptive?

I am now of the opinion that we have never really had any long periods of constant disruptive innovation – this is not a sign of failure but simply a reality of how complex adaptive systems across domains manage the tension between efficiency, robustness, evolvability and diversity. What we have had is a subverted control revolution where repeated attempts to achieve and hold onto an efficient equilibrium fail. Creative destruction occurs despite our best efforts to stamp it out. In a sense, disruption is an outsider to the essence of the industrial and post-industrial period of the last two centuries, the overriding philosophy of which is automation and algorithmisation aimed at efficiency and control. And much of our current troubles are a function of the fact that we have almost perfected the control project.

A Smithsonian Q & A with E. O. Wilson

From Carl Zimmer's interview with E.O. Wilson, over at the Loom:

Q: Just to take one example that the critics raised, they talked about how inclusive fitness theory makes a prediction about sex allocation, about the investment in different sexes in the offspring. And they say this is something that inclusive fitness predicts and we’ve gone out and we’ve done a lot of tests to see if that’s true and they find these ratios in lots of animals as predicted by that theory. When they make that sort of argument, what’s your response?

A: It’s a little bit like Ptolemaic astronomy: epicycles will always give the exact results if you’re willing to add them. And in this case–I have pointed this out as well–there’s a flaw in the reasoning about the studies of investment, particularly in whether you invest more in males or females in the social insect societies.

If you have only one female who is queen in the colony, and if that queen has mated only once, so that her offspring are that closely related, then, because of the implications of haplodiploidy (the way sex is determined in ants, bees, and wasps), you should see a favoring of investment in new queens over investment in males, as measured by the amount of biomass. And that inequality does exist; it should be a three-to-one investment by weight. And that has been thought to be a very powerful argument.

However, I believe this reasoning has a major flaw. The colony wishes to invest in males versus females in the numbers that would be most advantageous for having each female successfully mated when the reproductives leave the nest to mate, in bees, ants, and wasps. And therefore the colony should be trying to get something closer to a one-to-one ratio.

And the females are much bigger; they have to carry all that fat and ovary and so on. The males are much smaller because, in most of these social insects, all they have to do is find a female, deliver their sperm, and die.

This means that a one-to-one sex ratio in numbers, the same as you see throughout the rest of the animal kingdom, requires investing much more biomass in the females than in the males. And actually, when you make that hypothesis, use that principle, which is the obvious one, the prediction comes closer to the actual figures we have for the biomass investment.

They [Wilson’s critics] may dispute that, but my point is that they did not by any means find a testing ground on which the old theory could stand or fall. It is, in my view, a much simpler and more precise explanation to use the argument of one-to-one ratios of males and females.
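Wilson's argument is arithmetic at bottom, so a toy calculation may help. The masses below are invented for illustration (real queen/male dimorphism varies widely by species): the classic inclusive-fitness prediction (due to Trivers and Hare) fixes the biomass ratio at 3:1, while Wilson's alternative fixes the numerical ratio near 1:1 and lets size dimorphism generate the biomass skew.

```python
# Toy arithmetic only: the masses are invented, not taken from the interview.
# It contrasts the two predictions Wilson discusses for a colony with a
# single, singly-mated queen.

queen_mass = 5.0  # assumed: a new queen weighs five times as much as a male
male_mass = 1.0

# Inclusive-fitness prediction (Trivers and Hare): fix the *biomass*
# investment ratio at 3:1 in favor of new queens, then derive the numbers.
biomass_q, biomass_m = 3.0, 1.0
print(f"3:1 biomass  -> numbers {biomass_q / queen_mass:.2f} queens "
      f"per {biomass_m / male_mass:.2f} male")

# Wilson's alternative: fix the *numerical* ratio near 1:1 (what matters for
# getting each female mated), then derive the biomass investment.
n_q = n_m = 1.0
print(f"1:1 numbers  -> biomass {n_q * queen_mass:.1f} : {n_m * male_mass:.1f}")
```

On made-up numbers like these, both hypotheses yield a strongly female-biased biomass investment, which is one way of reading Wilson's claim that the biomass data alone provide no clean testing ground between the two theories.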

How to become the engineers of our own evolution

From Smithsonian:

The reports regularly come in from around the world: U.S. engineers unveil a prototype bionic eye, Swedish surgeons replace a man’s cancerous trachea with a body part grown in a lab, and a British woman augments her sense of touch by implanting self-made magnetic sensors in her fingertips.

Adherents of “transhumanism”—a movement that seeks to transform Homo sapiens through tools like gene manipulation, “smart drugs” and nanomedicine—hail such developments as evidence that we are becoming the engineers of our own evolution. Enhanced humans might inject themselves with artificial, oxygen-carrying blood cells, enabling them to sprint for 15 minutes straight. They could live long enough to taste a slice of their own 250th birthday cake. Or they might abandon their bodies entirely, translating the neurons of their brains into a digital consciousness. Transhumanists say we are morally obligated to help the human race transcend its biological limits; those who disagree are sometimes called Bio-Luddites. “The human quest has always been to ward off death and do everything in our power to keep living,” says Natasha Vita-More, chairwoman of Humanity+, the world’s largest transhumanist organization, with nearly 6,000 members.

More here.

Gene behind van Gogh’s sunflowers pinpointed

From Nature:

A team of plant biologists has identified the gene responsible for the ‘double-flower’ mutation immortalized by Vincent van Gogh in his iconic Sunflowers series. Van Gogh’s 1888 series includes one painting, now at the National Gallery in London, in which many of the flowers depicted lack the broad dark centre characteristic of sunflowers and instead comprise mainly golden petals. This was not simply artistic licence on van Gogh’s part but a faithful reproduction of a mutant variety of sunflower. In a paper published this week in PLoS Genetics, researchers at the University of Georgia in Athens report that they have pinned down the gene responsible for the mutation, which they say could shed light on the evolution of floral diversity.

A wild sunflower (Helianthus annuus) is not so much a single flower as a composite of tiny florets. The golden ray florets, located at the sunflower’s rim, resemble long petals, are bilaterally symmetrical and do not produce pollen. That job belongs to the disc florets, tiny radially symmetrical blossoms that occupy the sunflower's darker centre. In combination, the two types of florets create the impression of a single large flower, and presumably an attractive target for insect pollinators. “The success of the family is determined by floral strategy,” says plant biologist John Burke, who led the study. Because changes in floral symmetry can affect how a plant interacts with pollinators — and therefore its reproductive fitness — the unusual sunflowers depicted by van Gogh piqued Burke’s curiosity.

More here.

Friday, March 30, 2012

A Response to Justin Clarke-Doane’s “Morality and Mathematics: The Evolutionary Challenge”

Matthew Braddock, Andreas Mogensen, and Walter Sinnott-Armstrong over at Pea Soup:

In “Morality and Mathematics: The Evolutionary Challenge” (Ethics 2012), Justin Clarke-Doane raises fascinating and important issues about evolutionary debunking arguments. He argues that insofar as our knowledge of the evolutionary origins of morality poses a challenge for moral realism, exactly similar difficulties will arise for mathematical realism. Clarke-Doane concentrates on the claim that we were not selected to have true moral beliefs, which he interprets to mean that we would have evolved the very same moral beliefs even if the moral facts were radically different from what we take them to be. He argues that an analogous claim holds with respect to our mathematical beliefs: we would have evolved the same mathematical beliefs even if the mathematical facts were radically different from what mathematical realists take them to be. However, even if Clarke-Doane is correct in this, we suspect that his points miss two other kinds of evolutionary debunking arguments, which look to pose a special problem for moral realism.

First, Clarke-Doane twice quotes this claim by Sharon Street: “to explain why human beings tend to make the normative judgments that we do, we do not need to suppose that these judgments are true” (Street, “Reply to Copp”, 208). We take Street’s point to be that one can give a complete explanation of why humans tend to make certain moral judgments rather than others without ever saying anything that implies that any moral beliefs are true. This claim is only about what needs to be said in a complete explanation. It does not assume that moral truths or facts could be different than they are now. Moreover, this claim has no parallel regarding mathematics, because arguably a complete explanation of why humans tend to make certain mathematical judgments (e.g. 1+1=2) rather than others (e.g. 1+1=0) would need to say or imply that 1+1=2 and 1+1≠0. Hence, an evolutionary debunking argument based on this claim by Street understood in this way is not affected by Clarke-Doane’s points.

Towards a New Manifesto

Martin Jay reviews Theodor Adorno and Max Horkheimer's Towards a New Manifesto, in Notre Dame Philosophical Reviews:

Gretel Adorno was a remarkable woman about whom far too little is known.[1] Although the recent publication of her correspondence with Walter Benjamin has confirmed the impression that she was a formidable intellect in her own right, she remains largely a mystery.[2] What we do know for certain is that she was deeply devoted to her husband Theodor, whom she married in September, 1937. Abandoning a career as a chemist to support his work unreservedly, she seems to have been resigned to his extra-marital affairs, and was so despondent after his death in August, 1969 that she made a botched suicide attempt. Among the many services she rendered was the dutiful taking of minutes from the intellectual discussions he thought worth recording. Beginning in March of 1938, shortly after his emigration to America and full integration into the life of the Institut für Sozialforschung (then resettled in New York), she wrote down a number of conversations he had with the director of the Institute, Max Horkheimer.[3] She continued to play this role well after they all returned to Frankfurt in the early 1950s to reestablish the Institute.

One such conversation took place over several days in March and April, 1956, when Horkheimer and Adorno sat down to discuss a variety of pressing issues, political, sociological, and philosophical, and Gretel Adorno was there to record the results for posterity, or at least as an aide-mémoire for later more formal considerations of the same issues. Never intended for publication, the protocols nonetheless appeared in 1989 alongside many other drafts and notes as an appendix to the thirteenth volume of Horkheimer's collected works. They were blandly entitled “Diskussion über Theorie und Praxis.” Last year, they were translated into English by the venerable Rodney Livingstone for the New Left Review, and shortly thereafter repackaged as a little book with the much more provocative title Towards a New Manifesto.[4]

It is worth remembering Gretel Adorno's role in their preparation, and not only because it reminds us of the asymmetrical gender relations that prevailed at the Institute (which never had a major female presence in its ranks). Without a tape recorder, she was responsible for faithfully putting down a highly abstract conversation developing at breakneck speed — the editorial foreword rightly calls it “a careening flux of arguments, aphorisms, and asides, in which the trenchant alternates with the reckless, the playful with the ingenuous” — and it has to be accounted a minor miracle that anything coherent survived at all. If we add the tendentious title introduced by the publishers, which turns a relatively minor moment in the dialogue into its telos, it is clear that we have a text that cannot be understood as the polished reflections of authors who wanted these formulations to represent their considered opinions for public consumption. This is, in other words, a far cry from the finely wrought aphorisms of Horkheimer's Dämmerung or Adorno's Minima Moralia.