Karl Polanyi Explains It All


Robert Kuttner in The American Prospect:

In November 1933, less than a year after Hitler assumed power in Berlin, a 47-year-old socialist writer on Vienna’s leading economics weekly was advised by his publisher that it was too risky to keep him on the staff. It would be best both for the Österreichische Volkswirt and his own safety if Karl Polanyi left the magazine. Thus began a circuitous odyssey via London, Oxford, and Bennington, Vermont, that led to the publication in 1944 of what many consider the 20th century’s most prophetic work of political economy, The Great Transformation: The Political and Economic Origins of Our Time.

Polanyi, with no academic base, was already a blend of journalist and public intellectual, a major critic of the Austrian School of free-market economics and its cultish leaders, Ludwig von Mises and Friedrich Hayek. Polanyi and Hayek would cross swords for four decades—Hayek becoming more influential as an icon of the free-market right but history increasingly vindicating Polanyi.

Reluctantly, Polanyi left Vienna for London. Two of his British admirers, the Fabian socialist intellectuals G.D.H. Cole and Richard Tawney, found him a post at an Oxford-sponsored extension school for workers. Polanyi’s assignment was to teach English social and economic history. His research for the course informed the core thesis of his great book; his lecture notes became the working draft. This month marks the 70th anniversary of the book’s publication and also the 50th anniversary of Polanyi’s death in 1964.

Looking backward from 1944 to the 18th century, Polanyi saw the catastrophe of the interwar period, the Great Depression, fascism, and World War II as the logical culmination of laissez-faire taken to an extreme. “The origins of the cataclysm,” he wrote, “lay in the Utopian endeavor of economic liberalism to set up a self-regulating market system.” Others, such as John Maynard Keynes, had linked the policy mistakes of the interwar period to fascism and a second war. No one had connected the dots all the way back to the industrial revolution.

More here.

How the President Got to ‘I Do’ on Same-Sex Marriage


Jo Becker in the NYT Magazine (photo illustration by Daan Brand for The New York Times. Obama: Mark Wilson/Getty Images.):

Despite the president’s stated opposition, even his top advisers didn’t believe that he truly opposed allowing gay couples to marry. “He has never been comfortable with his position,” David Axelrod, then one of his closest aides, told me.

Indeed, long before Obama publicly stated that he was against same-sex marriage, he was on the record supporting it. As an Illinois State Senate candidate from Chicago’s liberal Hyde Park enclave, Obama signed a questionnaire in 1996 saying, “I favor legalizing same-sex marriages, and would fight efforts to prohibit such marriages.” But as his ambitions grew, and with them the need to appeal to a more politically diverse electorate, his position shifted.

In the course of an unsuccessful run for a House seat in 2000, he said he was “undecided” on the question. By the time he campaigned for the presidency, he had staked out an even safer political position: Citing his Christian faith, he said he believed marriage to be the sacred union of a man and a woman.

The assumption going into the 2012 campaign was that there was little to be gained politically from the president’s coming down firmly in favor of same-sex marriage. In particular, his political advisers were worried that his endorsement could splinter the coalition needed to win a second term, depressing turnout among socially conservative African-Americans, Latinos and white working-class Catholics in battleground states.

But by November 2011, it was becoming increasingly clear that continuing to sidestep the issue came with its own set of costs. The campaign’s internal polling revealed that the issue was a touchstone for likely Obama voters under 30.

More here.

How Philosophy Makes Progress


Rebecca Newberger Goldstein in The Chronicle of Higher Education (image: André da Loba for The Chronicle):

Questions of physics, cosmology, biology, psychology, cognitive and affective neuroscience, linguistics, mathematical logic: Philosophy once claimed them all. But as the methodologies of those other disciplines progressed—being empirical, in the case of all but logic—questions over which philosophy had futilely sputtered and speculated were converted into testable hypotheses, and philosophy was rendered forevermore irrelevant.

Is there any doubt, demand the naysayers, about the terminus of this continuing process? Given enough time, talent, and funding, there will be nothing left for philosophers to consider. To quote one naysayer, the physicist Lawrence Krauss, “Philosophy used to be a field that had content, but then ‘natural philosophy’ became physics, and physics has only continued to make inroads. Every time there’s a leap in physics, it encroaches on these areas that philosophers have carefully sequestered away to themselves.” Krauss tends to merge philosophy not with literature, as Wieseltier does, but rather with theology, since both, by his lights, are futile attempts to describe the nature of reality. One could imagine such a naysayer conceding that philosophers should be credited with laying the intellectual eggs, so to speak, in the form of questions, and sitting on them to keep them warm. But no life, in the form of discoveries, ever hatches until science takes over.

There’s some truth in the naysayer’s story. As far as our knowledge of the nature of physical reality is concerned—four-dimensional space-time and genes and neurons and neurotransmitters and the Higgs boson and quantum fields and black holes and maybe even the multiverse—it’s science that has racked up the results. Science is the ingenious practice of prodding reality into answering us back when we’re getting it wrong (although that itself is a heady philosophical claim, substantiated by concerted philosophical work).

And, of course, we have a marked tendency to get reality wrong.

More here.

Hejinian, Whitman, and more on the politics of sleep

Siobhan Phillips at Poetry Magazine:

Sleep is invisible and inconsistent. Aping death, sleep in fact prevents it; at the very least, sleep deprivation leads to premature demise (and before that, failures in mood, metabolism, cognitive function). All animals sleep, and it makes sense for none of them, evolutionarily, since it leaves the sleeper defenseless to predation. Sleep is common, public, a vulnerability we all share—even as sleep also brackets the sleeper in the most impenetrable of privacies. Nothing, everyone knows, is harder to communicate than one’s dream.

And then there’s time. Sleep seems to remove us from the general tyranny of the advancing clock. When you wake, 20 minutes could have passed as easily as three hours. But sleep defines time, dividing day and night. Humans discover circadian rhythm through the urge to sleep. That urge is, of course, cyclic, endless: always more sleep to be had. But sleep measures forward progress by consolidating our sense of the past. (Steven W. Lockley and Russell G. Foster lay out the evidence for this and other facts in their briskly informative Sleep: A Very Short Introduction.) In sleep, our brains decide what to keep and discard. Without sleep, we would dissolve into overloaded confusion.

more here.

crimean meditations

Jacob Mikanowski at The Millions:

Who does the Crimea belong to?

First of all, to the sea that made it. Seven thousand years ago, the Black Sea was much lower than it is today. Then a waterfall tumbled over the Bosporus, and the waters began to rise. The flood cut the Crimea off from the mainland – all the way except for a narrow isthmus called the Perekop. Ever since, it has been a rocky island on the shores of a sea of grass.

The steppes belonged to the nomads. Grass meant horses, and freedom. The steppes stretched north, from the mouth of the Danube to the Siberian Altai. Across the centuries they were home to various nomadic confederations and tribes: Scythians, Sarmatians, Huns, Pechenegs, Cumans, Mongols, and Kipchak Turks. The legendary Cimmerians predate them all; the Cossacks are still there today.

At times, the nomadic tribes made their home in Crimea too.

more here.

The Dadliest Decade

Willie Osterweil at The Paris Review:

The eighties, at least, were drenched in cocaine and neon, slick cars and yacht parties, a real debauched reaction. But nineties white culture was all earnest yearning: the sorrow of Kurt Cobain and handwringing over selling out, crooning boy-bands and innocent pop starlets, the Contract With America and the Starr Report. It was all so self-serious, so dadly.

Today, by some accounts, the nineties dad is cool again, at least if you think normcore is a thing beyond a couple NYC fashionistas and a series of think pieces. Still, that’s shiftless hipsters dressed like dads, not dads as unironic heroes and subjects of our culture. If the hipster cultural turn in the following decades has been to ironize things to the point of meaninglessness, so be it. At least they don’t pretend it’s a goddamn cultural revolution when they have a kid: they just let their babies play with their beards and push their strollers into the coffee shop. In the nineties, Dad was sometimes the coolest guy in the room. He was sometimes the butt of the joke. He was sometimes the absence that made all the difference. But he was always, insistently, at the center of the story.

more here.

Shift the meat-to-plant ratio

Miles Becker in Conservation:

Can farmers feed an additional 4 billion people with current levels of crop production? A team from the University of Minnesota tackled the problem by shifting the definition of agricultural productivity from the standard measure (tons per hectare) to the number of people fed per hectare. They then audited the global caloric budget and found a way to squeeze out another 4 quadrillion calories per year from existing crop fields. Their starting point was meat production, the most inefficient use of calories to feed people. The energy available from a plant crop such as corn dwindles dramatically when it goes through an intermediate consumer such as a pig. Beef has the lowest caloric conversion efficiency: only 3 percent. Pork and chicken do three to four times better. Milk and eggs, animal products that provide us essential nutrients in smaller batches, are a much more efficient use of plant calories.

The researchers calculated that 41 percent of crop calories made it to the table from 1997 to 2003, with the rest lost mainly to gastric juices and droppings of livestock. Crop calorie efficiency is expected to fall as the meat market grows. Global meat production boomed from 250.4 million tons in 2003 to 303.9 million tons by 2012, as reported by the FAO. Rice production, mainly for human food, dwindled by 18 percent over the same time period. The authors of the 2013 paper, published in Environmental Research Letters, suggested a trend reversal would be desirable. They estimated that a shift from crops destined for animal feed and industrial uses toward human food could hypothetically increase available calories by 70 percent and feed another 4 billion people each year.
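The arithmetic behind the headline number can be checked on the back of an envelope. Below is a minimal Python sketch: the 4-quadrillion-calorie figure and the 3 percent beef efficiency come from the piece, while the roughly 2,700 kcal per person per day is an assumed average dietary requirement, not a number given in the article.

```python
# Back-of-envelope check of the figures quoted above.
# Assumption (not from the article): an average requirement of ~2,700 kcal
# per person per day, roughly the figure used in comparable analyses.

FREED_CALORIES_PER_YEAR = 4e15   # "4 quadrillion calories per year" (kcal), from the study
KCAL_PER_PERSON_PER_DAY = 2700   # assumed average dietary requirement
DAYS_PER_YEAR = 365

people_fed = FREED_CALORIES_PER_YEAR / (KCAL_PER_PERSON_PER_DAY * DAYS_PER_YEAR)
print(f"Additional people fed: {people_fed / 1e9:.1f} billion")   # ~4.1 billion

# Illustration of the caloric conversion efficiency quoted for livestock:
# beef ~3 percent, pork and chicken roughly three to four times better.
beef_efficiency = 0.03
feed_kcal = 1000
print(f"{feed_kcal} kcal of feed -> ~{feed_kcal * beef_efficiency:.0f} kcal of beef")
```

With the assumed 2,700 kcal daily requirement, the freed calories work out to roughly four billion people, matching the figure reported from the Environmental Research Letters paper.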

More here.

brain’s anti-distraction system

From Phys.Org:

Two Simon Fraser University psychologists have made a brain-related discovery that could revolutionize doctors' perception and treatment of attention-deficit disorders.

This discovery opens up the possibility that environmental and/or genetic factors may hinder or suppress a specific activity that the researchers have identified as helping us prevent distraction. The Journal of Neuroscience has just published a paper about the discovery by John McDonald, an associate professor of psychology, and his doctoral student John Gaspar, who made the discovery during his master's thesis research. This is the first study to reveal that our brains rely on an active suppression mechanism to avoid being distracted by salient irrelevant information when we want to focus on a particular item or task.

McDonald, a Canada Research Chair in Cognitive Neuroscience, and other scientists first discovered the existence of the specific neural index of suppression in his lab in 2009. But, until now, little was known about how it helps us ignore visual distractions. “This is an important discovery for neuroscientists and psychologists because most contemporary ideas of attention highlight brain processes that are involved in picking out relevant objects from the visual field. It's like finding Waldo in a Where's Waldo illustration,” says Gaspar, the study's lead author.

More here.

Paul de Man Was a Total Fraud: The literary-critical giant lied about every part of his life

Robert Alter in The New Republic:

Evelyn Barish begins her impressively researched biography by flatly stating that “Paul de Man no longer seems to exist.” This may be an exaggerated expression of frustration by a biographer whose long-incubated work now appears after what might have been the optimal time for it. Yet there is considerable truth in what she says. De Man is now scarcely remembered by the general public, though he was the center of a widely publicized scandal in 1988, five years after his death at the age of 64. In the 1970s and 1980s, he was a central figure, an inevitable figure, in American literary studies, in which doctoral dissertations, the great barometer of academic fashion, could scarcely be found without dozens of citations from his writings. But the meteor has long since faded: over the past decade and more, I have only rarely encountered references to de Man in students’ work, committed as they generally are to marching with the zeitgeist.

Paul de Man arrived in the United States from his native Belgium in the spring of 1948. He would remain in this country illegally after the expiration of his temporary visa, on occasion finding ways to elude the Immigration and Naturalization Service. But that, as Barish’s account makes clear, was the least of his infractions of the law. Eventually he would be admitted, with a considerable amount of falsification on his part, to the doctoral program in comparative literature at Harvard, from which he would receive a degree, in somewhat compromised circumstances, in 1960. He then went on to teach at Cornell, briefly at Johns Hopkins, and most significantly at Yale, where he became a “seminal” scholar and an altogether revered figure.

More here.

2014 Pulitzer Prize Winners in Journalism, Letters, Drama and Music

From the New York Times:

FICTION

DONNA TARTT

“The Goldfinch” (Little, Brown)

Ms. Tartt’s best-selling novel is about a boy who comes into possession of a painting after an explosion at a museum.

In a phone conversation on Monday, Ms. Tartt, 50, said the novel “was always about a child who had stolen a painting,” but it was only two years into writing the book that she saw “The Goldfinch,” a 17th-century work by Carel Fabritius.

“It fit into the plot of the book I was writing in ways I couldn’t have imagined,” she said. “It had to be a small painting that a child could carry, and that a child could be obsessed by.”

Finalists Philipp Meyer, “The Son”; Bob Shacochis, “The Woman Who Lost Her Soul.”

More here.

Why Nobody Can Tell Whether the World’s Biggest Quantum Computer is a Quantum Computer


Leo Mirani and Gideon Lichfield in Quartz (via Jennifer Ouellette, D-Wave Systems photo):

For the past several years, a Canadian company called D-Wave Systems has been selling what it says is the largest quantum computer ever built. D-Wave’s clients include Lockheed Martin, NASA, the US National Security Agency, and Google, each of which paid somewhere between $10 million and $15 million for the thing. As a result, D-Wave has won itself millions in funding and vast amounts of press coverage—including, two months ago, the cover of Time (paywall).

These machines are of little use to consumers. They are delicate, easily disturbed, require cooling to just above absolute zero, and are ruinously expensive. But the implications are enormous for heavy number-crunching. In theory, banks could use quantum computers to calculate risk faster than their competitors, giving them an edge in the markets. Tech companies could use them to figure out if their code is bug-free. Spies could use them to crack cryptographic codes, which requires crunching through massive calculations. A fully-fledged version of such a machine could theoretically tear through calculations that the most powerful mainframes would take eons to complete.

The only problem is that scientists have been arguing for years about whether D-Wave’s device is really a quantum computer or not. (D-Wave canceled a scheduled interview and did not reschedule.) And while at some level this doesn’t matter—as far as we know, D-Wave’s clients haven’t asked for their money back—it’s an issue of importance to scientists, to hopeful manufacturers of similar machines, and to anyone curious about the ultimate limits of humankind’s ability to build artificial brains.

More here.

russia and the history of ‘eurasianism’

Pádraig Murphy at The Dublin Review of Books:

There is thus a lively debate in Russia itself on the country’s orientation. The question is, where does the leadership stand in this debate? The answer is difficult, because not only has Russia become more autocratic under Putin, but the circle of real decision-makers has become ever smaller. According to some accounts, it may consist of no more than five people. But, reviewing the period since 2000, when Putin assumed power, it is plausible that it began with a continuation of a commitment to democracy and a market economy, associated with a growing resentment at lack of consideration on the part of the West to certain deep Russian concerns – NATO enlargement, treatment as a poor supplicant, disregard for what are seen as legitimate interests in the neighbourhood etc. Angela Stent cites a senior German official complaining of an “empathy deficit disorder” in Washington in dealing with Russia. The pathology that this caused became progressively more virulent in the intervening years, culminating in 2003 in the invasion of Iraq without any Security Council mandate, indeed, in open defiance of the UN. After this, the New York Times magazine’s Ron Suskind reported on a visit to the Bush White House in 2004 in the course of which he recounts that “an aide” (commonly supposed to be Karl Rove) “said that guys like me were ‘in what we call the reality-based community’, which he defined as people who ‘believe that solutions emerge from your judicious study of discernible reality’… ‘That’s not the way the world really works any more’, he continued. ‘We’re an empire now, and when we act, we create our own reality. And while you’re studying that reality – judiciously, as you will – we’ll act again, creating other new realities, which you can study too, and that’s how things will sort out. We’re history’s actors … and you, all of you, will be left to just study what we do.’”

more here.

The Guggenheim’s Futurism exhibition

Barry Schwabsky at The Nation:

For years I’ve been hearing it said that young artists think art began with Andy Warhol. It’s never been true. But now what I hear is art historians complaining that none of their students want to study anything but contemporary art. Among young art historians, it seems, to delve as far back as the 1960s is to be considered an antiquarian. “They only take my courses because they think they need some ‘background,’” one Renaissance specialist told me. “We have to accept almost anyone who applies saying that they want to study anything before the present, just to give our current faculty something to do.” What a time, when the art historians have less historical consciousness than the artists—and no wonder that the former, these days, show so little interest in what the latter actually do.

When I was a grad student (in a different field), the budding art historians I knew were studying medieval, they were studying mannerism, they were studying the Maya. No one thought of studying living artists. The most adventurous ones might be investigating Italian Futurism. Now the Futurists seem as distant as the Maya. But might this be their own fault?

more here.

The Mental Life of Plants and Worms

Oliver Sacks at the New York Review of Books:

We all distinguish between plants and animals. We understand that plants, in general, are immobile, rooted in the ground; they spread their green leaves to the heavens and feed on sunlight and soil. We understand that animals, in contrast, are mobile, moving from place to place, foraging or hunting for food; they have easily recognized behaviors of various sorts. Plants and animals have evolved along two profoundly different paths (fungi have yet another), and they are wholly different in their forms and modes of life.

And yet, Darwin insisted, they were closer than one might think. He wrote a series of botanical books, culminating in The Power of Movement in Plants (1880), just before his book on earthworms. He thought the powers of movement, and especially of detecting and catching prey, in the insectivorous plants so remarkable that, in a letter to the botanist Asa Gray, he referred to Drosera, the sundew, only half-jokingly as not only a wonderful plant but “a most sagacious animal.”

Darwin was reinforced in this notion by the demonstration that insect-eating plants made use of electrical currents to move, just as animals did—that there was “plant electricity” as well as “animal electricity.”

more here.

Samuel Beckett’s lost work

Tim Martin in The Guardian:

The years in which the young Samuel Beckett prepared and published his first collection of short stories were, as he later remarked, “bad in every way, financially, psychologically”. In late 1930 he had returned to Dublin from teaching at the École Normale Supérieure in Paris, reluctantly swapping the shabby dazzle of James Joyce’s circle and the fun of drunken nights on the town for a post lecturing at Trinity College that he soon came to hate. Painfully awkward and shy, Beckett was tortured by public speaking, and he dreaded what he called the “grotesque comedy of lecturing” that involved “teaching to others what he did not know himself”. To the horror of his parents, he resigned, bouncing disconsolately between Germany, Paris and London on a family stipend as he tried to get his first novel off the ground. Money became shorter and shorter. In the autumn of 1932, he was forced to “crawl home” to his parents in Dublin when the last £5 note his father sent him was stolen from his digs. He was 26.

At home, however, his problems were far from over. It soon became clear that Dream of Fair to Middling Women, the madcap, erudite, Joycean book he had written at speed in Paris earlier that year, was not going to be the success he imagined. During a miserable spell in London, feeling “depressed, the way a slug-ridden cabbage might expect to be”, he shopped the manuscript around to several publishers: Chatto & Windus, the Hogarth Press, Jonathan Cape and Grayson & Grayson. The letter he wrote later to a friend summarised the results of the trip. “Shatton and Windup thought it was wonderful but they simply could not. The Hogarth Private Lunatic Asylum rejected it the way Punch would. Cape was écoeuré [disgusted] in pipe and cardigan and his Aberdeen terrier agreed with him. Grayson has lost it or cleaned himself with it.” Back in Dublin, wearily recognising that Dream might be unpublishable (it appeared posthumously in 1992), Beckett devoted his remaining energy to compiling a volume of short stories. Like his novel, these covered episodes in the life of Belacqua Shuah, a Dublin student who shared the author’s obsession with Dante and Augustine as well as his hang-ups about sex.

More here.

Sperm RNA carries marks of trauma

Virginia Hughes in Nature:

Trauma is insidious. It not only increases a person’s risk for psychiatric disorders, but can also spill over into the next generation. People who were traumatized during the Khmer Rouge genocide in Cambodia tended to have children with depression and anxiety, for example, and children of Australian veterans of the Vietnam War have higher rates of suicide than the general population.

Trauma’s impact comes partly from social factors, such as its influence on how parents interact with their children. But stress also leaves ‘epigenetic marks’ — chemical changes that affect how DNA is expressed without altering its sequence. A study published this week in Nature Neuroscience finds that stress in early life alters the production of small RNAs, called microRNAs, in the sperm of mice (K. Gapp et al. Nature Neurosci. http://dx.doi.org/10.1038/nn.3695; 2014). The mice show depressive behaviours that persist in their progeny, which also show glitches in metabolism. The study is notable for showing that sperm responds to the environment, says Stephen Krawetz, a geneticist at Wayne State University School of Medicine in Detroit, Michigan, who studies microRNAs in human sperm. (He was not involved in the latest study.) “Dad is having a much larger role in the whole process, rather than just delivering his genome and being done with it,” he says. He adds that this is one of a growing number of studies to show that subtle changes in sperm microRNAs “set the stage for a huge plethora of other effects”.

More here.

Tuesday Poem

.
I remember your square jaw
father
Strong and viselike
Your grip
Of my hand father
That wouldn’t let go

I remember you at the bottom of the stairs
father
Telling me
We had to go son
now

I remember the hat
The small brim
With its feather
father
You always wore
As if leaving without it
Was like being naked in the sun

I remember you standing
father
Behind the old glass counter
With its huge crack
weight upon your right foot
father

I remember that subtle smile
father
Showing only a portion
Of the false teeth
You despised

I remember you father asking me
father
With your worried look father
Why I liked that girl
With the dark skin

I never knew father
What you father
were thinking

.
by Bill Schneberger
.
.

The conflict between competition and leisure

by Emrys Westacott

In 1930 the economist John Maynard Keynes predicted that increases in productivity due to technological progress would lead within a century to most people enjoying much more leisure. He believed that by 2030 the average working week would be around fifteen hours. Eighty-four years later, it doesn't look like this prediction will come true. Most full-time workers work two, three, or four times that, and many part-time workers would work more hours if they could, since they need the money.

So why haven't we come closer to realizing Keynes's expectations? In their recent book, How Much Is Enough? Money and the Good Life (Other Press, 2012), Robert and Edward Skidelsky offer an interesting answer. According to them Keynes' mistake was his failure to realize that capitalism has unleashed forces that can't be brought under control. Specifically, it has greatly inflamed a natural human desire for recognition and status, turning it into an insatiable desire for ever more wealth—wealth being the number one determinant of status in our society. If we could just settle for a modest level of comfort, we could work far less. But the yearning for more wealth and more stuff now leads people to spend far more time working than they need to. The same insatiability characterizes our society as a whole. Every politician and most economists take for granted that we should be striving with all our might to achieve economic growth without limit. The wisdom of this relentless, endless pursuit of economic growth is rarely questioned.

The Skidelskys' explanation of why we still work much more than Keynes predicted isn't entirely wrong, but I don't think it's the whole story or even the most important part. It's no doubt true of some people that they are driven to work more than they need to by insatiable greed. But I suspect that far more people work the hours they do because of circumstances beyond their control. For instance, many people work long hours simply because their hourly wage is quite low, so they work overtime, or perhaps take a second job, just in order to have enough to live on. Some live in expensive metropolitan areas like Boston or San Francisco, so even though they make a good wage, they actually need a full-time job even to secure a fairly modest level of comfort, given the cost of housing. Many people keep working full time, even though they'd like to retire or go part time, because only a full-time job will provide indispensable benefits like health insurance and a pension. And lots of people would like to cut back the hours they work but can't for a simple reason: their boss won't let them.

But there's also another factor preventing us from achieving a more leisured and balanced lifestyle, and that is the intensely competitive social environment in which we live.

Read more »