Why Only One Top Banker Went to Jail for the Financial Crisis

Jesse Eisinger in NYTimes Magazine:

Serageldin’s life was about to become more ascetic. Two months earlier, he sat in a Lower Manhattan courtroom adjusting and readjusting his tie as he waited for a judge to deliver his prison sentence. During the worst of the financial crisis, according to prosecutors, Serageldin had approved the concealment of hundreds of millions in losses in Credit Suisse’s mortgage-backed securities portfolio. But on that November morning, the judge seemed almost torn. Serageldin lied about the value of his bank’s securities — that was a crime, of course — but other bankers behaved far worse. Serageldin’s former employer, for one, had revised its past financial statements to account for $2.7 billion that should have been reported. Lehman Brothers, AIG, Citigroup, Countrywide and many others had also admitted that they were in much worse shape than they initially allowed. Merrill Lynch, in particular, announced a loss of nearly $8 billion three weeks after claiming it was $4.5 billion. Serageldin’s conduct was, in the judge’s words, “a small piece of an overall evil climate within the bank and with many other banks.” Nevertheless, after a brief pause, he eased down his gavel and sentenced Serageldin, an Egyptian-born trader who grew up in the barren pinelands of Michigan’s Upper Peninsula, to 30 months in jail. Serageldin would begin serving his time at Moshannon Valley Correctional Center, in Philipsburg, where he would earn the distinction of being the only Wall Street executive sent to jail for his part in the financial crisis.

Read the rest here.

The urban paradox

Tom Cowan at openDemocracy:

In the current conjuncture, cities are sites of two counterposed tendencies. First, the city is upheld as the physical metonym of modernity, the unsurpassable form of human progress, wherein any manner of economic, social and environmental ills may be treated—where non-people become people, where technology and smartness come to govern political and social contestations, where human resilience and innovation (no matter how destitute such humans may be) can mitigate the oppressive character of capital-led urban growth. Against and yet within this, largely neo-liberal, imagination exists the global trend of urban retraction, of bordering, segregation, fragmentation, state withdrawal, enclave-ing. The traditional model of urban entrepreneurialism which David Harvey discussed in the 1980s is today optimised from particular, mostly elite fragments of accumulation (the mega-event, the gated community, the mall, etc.), marginalising entire populations, entire ways of thinking and being deemed obsolete. These are two contradictory arms of neoliberal urbanism.

Cities, whether moving from established welfarist models or from longer heritages of fragmentation, are clenched in these two contradictory logics, of urban saviourism and of withdrawal. The space wherein the utopian conception of the city operates is getting smaller and smaller, higher and higher. There are examples all over the world—from India’s “Smart” Dholera and privately governed Gurgaon, to inner London’s property-led social cleansing of working-class, black and otherwise undesirable residents, to Durban’s brutal oppression and marginalisation of shack-dwellers and the privatised “charter” Cities of the US and Honduras. This is as true of older urban settings as the new developments (even if more acute in the latter) and is particularly pertinent given the mass capitalist urban productivism still predominating in China, India, South Africa and Nigeria.

Importantly this paradox breeds conflicts: the counter-logics of increasing fragmentation and mass influxes of urban population for example are necessarily complicit and intertwined, proliferating and confronting spaces of obstruction, contradiction and resistance. Conflicts over whom and what our urban environments are for, over the pervasive and destructive rhetorics of “renewal”, “regeneration”, “beautification”, “resilience” and “the modern”. Within these conflicts, and amid pervasive mass dispossessions, residents of the city are utilising their own produced spaces to obstruct, expel and resist the devastating effects of the urban paradox.

Read the rest here.

Friday Poem

The Bridgetower

…..per il Mulatto Brischdauer
…..gran pazzo e compositore mulattico
…………. ––Ludwig van Beethoven, 1803

If was at the Beginning. If
he had been older, if he hadn’t been
dark, brown eyes ablaze
in that remarkable face;
if he had not been so gifted, so young
a genius with no time to grow up;
if he hadn’t grown up, undistinguished,
to an obscure old age.
If the piece had actually been,
as Kreutzer exclaimed, unplayable––even after
our man had played it, and for years,
no one else was able to follow––
so that the composer’s fury would have raged
for naught, and wagging tongues
could keep alive the original dedication
from the title page he shredded.

Oh, if only Ludwig had been better-looking,
or cleaner, or a real aristocrat,
von instead of the unexceptional van
from some Dutch farmer; if his ears
had not already begun to squeal and whistle;
if he hadn’t drunk his wine from lead cups,
if he could have found True Love. Then
the story would have held: In 1803
George Polgreen Bridgetower,
son of Friedrich Augustus the African Prince
and Maria Anna Sovinki of Biala in Poland,
traveled from London to Vienna,
where he met the Great Master
who would stop work on his Third Symphony
to write a sonata for his new friend
to premiere triumphantly on May 24th,
whereupon the composer himself
leapt up from the piano to embrace
his “lunatic mulatto.”

Who knows what would have followed?
They might have palled around some,
just a couple of wild and crazy guys
strutting the town like rock stars,
hitting the bars for a few beers, a few laughs . . .
instead of falling out over a girl
nobody remembers, nobody knows.

Then this bright-skinned papa’s boy
could have sailed his fifteen-minute fame
straight into the record books––where,
instead of a Regina Carter or Aaron Dworkin or Boyd Tinsley
sprinkled here and there, we would find
rafts of black kids scratching out scales
on their matchbox violins so that some day
they might play the impossible:
Beethoven’s Sonata No. 9 in A Major, Op. 47,
also known as The Bridgetower.

by Rita Dove
from Sonata Mulattica
W.W. Norton, 2009

George Augustus Polgreen Bridgetower

David Lynch, Hiding in Plain Sight

Dan Piepenbring in Paris Review:

As David Foster Wallace wrote in 1995, “Lynch’s movies are about images and stories in his head that he wants to see made external and complexly real.” Lynch has expanded the grammar of film as much as any director of his era; he has as singular and penetrating a vision of American life as any living artist—and he had almost nothing to say about his work, especially not about his movies. He wasn’t going to talk shop. He wasn’t, with the exception of a bit about Eraserhead’s Lady in the Radiator, going to talk about the stories in his head. Sometimes it seemed he wasn’t going to talk, period. “One time I heard it,” he said of Bobby Vinton’s cover of “Blue Velvet”—“and images started coming from this song…”

Aha! What images?

David Lynch didn’t say what images.

Holdengräber tried to draw him out. It was just the two of them onstage, under the hot lights—two chairs, a small table, an area rug, and an awed hush. In his gentle Continental croon, Holdengräber read aloud a few florid quotations and asked Lynch to react. Lynch didn’t care much for reaction. The quotations were beautiful, he acknowledged. Many things last night would be described as beautiful. On a big screen behind the men, an iconic image from Blue Velvet appeared: a detached, decaying ear half buried in a suburban lawn. And Holdengräber ventured, coyly—he might ask, why this ear … “You’d have to see the film,” Lynch said. This got a big laugh. Silly Holdengräber, the audience seemed to say, with his insistence on interpretation, his outmoded desire to know more! Still, maybe he could goad Lynch into saying something about his thoughts, his inspirations—the intro to Eraserhead crept onto the big screen, and Holdengräber recited another beautiful quotation about the dreamlike nature of the cinema…“It’s done,” Lynch said conclusively of Eraserhead. “Then it goes out into the world, and it is what it is.” Now the people broke into boisterous applause. Absolutely! It is what it is! What a genius! Who could deign to take issue with such self-evident, tautological truth?

More here.

Humanity in jeopardy

Max Tegmark in KurzweilAI:

Exactly three years ago, on January 13, 2011, humans were dethroned by a computer on the quiz show Jeopardy! A year later, a computer was licensed to drive cars in Nevada, after being judged safer than a human. What’s next? Will computers eventually beat us at all tasks, developing superhuman intelligence? I have little doubt that this can happen: our brains are a bunch of particles obeying the laws of physics, and there’s no physical law precluding particles from being arranged in ways that can perform even more advanced computations. But will it happen anytime soon? Many experts are skeptical, while others such as Ray Kurzweil predict it will happen by 2045. What I think is quite clear is that if it happens, the effects will be explosive: as Irving Good realized in 1965, machines with superhuman intelligence could rapidly design even better machines. Vernor Vinge called the resulting intelligence explosion “the singularity,” arguing that it was a point beyond which it was impossible for us to make reliable predictions. After this, life on Earth would never be the same. Whoever or whatever controls this technology would rapidly become the world’s wealthiest and most powerful, outsmarting all financial markets, out-inventing and out-patenting all human researchers, and out-manipulating all human leaders. Even if we humans nominally merge with such machines, we might have no guarantees whatsoever about the ultimate outcome, making it feel less like a merger and more like a hostile corporate takeover.

In summary, will there be a Singularity within our lifetime? And is this something we should work for or against? On one hand, it could potentially solve most of our problems, even mortality. It could also open up space, the final frontier: unshackled by the limitations of our human bodies, such advanced life could rise up and eventually make much of our observable universe come alive. On the other hand, it could destroy life as we know it and everything we care about — there are ample doomsday scenarios that look nothing like the Terminator movies, but are far more terrifying.

More here.

Thursday, May 1, 2014

Why Is the US the Only Country that Celebrates ‘Loyalty Day’ on May 1?

Jon Wiener in The Nation:

For more than a century, May 1 has been celebrated as International Workers’ Day. It’s a national holiday in more than eighty countries. But here in the land of the free, May 1 has been officially declared “Loyalty Day” by President Obama. It’s a day “for the reaffirmation of loyalty”—not to the international working class, but to the United States of America.

Obama isn’t the first president to declare May 1 Loyalty Day—that was President Eisenhower, in 1959, after Congress made it an official holiday in the fall of 1958. Loyalty Day, the history books explain, was “intended to replace” May Day. Every president since Ike has issued an official Loyalty Day proclamation for May 1.

The presidential proclamation always calls on people to “display the flag.” In case you were wondering, that’s the stars and stripes, not the red flag. Especially in the fifties, if you didn’t display the stars and stripes on Loyalty Day, your neighbors might conclude that you were some kind of red.

During the 1930s and 1940s, May Day parades in New York City involved hundreds of thousands of people. Labor unions, Communist and Socialist parties, and left-wing fraternal and youth groups would march down Fifth Avenue and end up at Union Square for stirring speeches on class solidarity.

More here.

Why I Teach Plato to Plumbers: Liberal arts and the humanities aren’t just for the elite

Scott Samuelson in The Atlantic:

Once, when I told a guy on a plane that I taught philosophy at a community college, he responded, “So you teach Plato to plumbers?” Yes, indeed. But I also teach Plato to nurses’ aides, soldiers, ex-cons, preschool music teachers, janitors, Sudanese refugees, prospective wind-turbine technicians, and any number of other students who feel like they need a diploma as an entry ticket to our economic carnival. As a result of my work, I’m in a unique position to reflect on the current discussion about the value of the humanities, one that seems to me to have lost its way.

As usual, there’s plenty to be worried about: the steady evaporation of full-time teaching positions, the overuse and abuse of adjunct professors, the slashing of public funding, the shrinkage of course offerings and majors in humanities disciplines, the increase of student debt, the peddling of technologies as magic bullets, the ubiquitous description of students as consumers. Moreover, I fear in my bones that the supremacy of a certain kind of economic-bureaucratic logic—one of “outcomes,” “assessment,” and “the bottom-line”—is eroding the values that undergird not just our society’s commitment to the humanities, but to democracy itself.

The problem facing the humanities, in my view, isn’t just about the humanities. It’s about the liberal arts generally, including math, science, and economics. These form half of the so-called STEM (science, technology, engineering, math) subjects, but if the goal of an education is simply economic advancement and technological power, those disciplines, just like the humanities, will be—and to some degree already are—subordinated to future employment and technological progress. Why shouldn’t educational institutions predominately offer classes like Business Calculus and Algebra for Nurses? Why should anyone but hobbyists and the occasional specialist take courses in astronomy, human evolution, or economic history?

More here.

Why Low-Income Kids Are Thriving in Salt Lake City

Nancy Cook in The Atlantic:

In the summer of 2013, four prominent economists from Harvard and the University of California, Berkeley, named Salt Lake City one of the best places in the country for upward mobility. Low-income kids who grew up in the region, the researchers found, had some of the greatest chances of moving up the income ladder as they aged.

Salt Lake City, with roughly 180,000 residents, shared the admirable distinction with major coastal cities such as San Diego, San Francisco, Seattle, Washington, D.C., New York, and Boston. The list generated significant buzz in academic, economic, and urban-planning circles both for its broad scope and for its finding that where people live can profoundly affect their children's economic futures. The U.S. is no longer uniformly the land of opportunity, the study showed, unless you happened to live in the right place.

For their part, Salt Lake City officials heralded the study as yet another piece of evidence for the region's high quality of life, alongside its low unemployment rate. But for another group of locals—social workers, educators, and community advocates—the study was also a cautionary tale…

To maintain its status as a model for the American Dream, Salt Lake City government officials, civic leaders, and the powerful Mormon church are pursuing various strategies in schools and neighborhoods to try to continue to give lower-income children the best boost up the income ladder.

Salt Lake City still possesses two of the major strengths that made it one of the best cities in the country for upward mobility: a strong middle class and a less extreme gap between the rich and the poor. But what worries Salt Lake City academics and advocates now is that the city has fallen behind on other factors as it has become more global and diverse. “We're beginning to see the start of intergenerational poverty here, whereas we have not seen that in the past,” says Pamela Perlich, a senior research economist with the local Bureau of Economic and Business Research. And that raises questions about whether Salt Lake City, like other rapidly changing urban areas, can continue to provide the best opportunities for its low-income kids.

Read the rest here.

the way life thinks

Barbara J. King at the Times Literary Supplement:

Semiosis is at the centre of Kohn’s framework for explaining how the forest “thinks”. Kohn relies heavily on Charles Peirce’s notion that signs should be defined broadly to include those with and those without linguistic properties. Peirce’s tripartite division of signs is well known. Icons are signs of likeness, reflecting the properties of that to which they refer, in the way that a photograph is – or as the sound tsupu does, representing a peccary who slips into a pool of water in the forest. (Kohn writes: “Once I tell people what tsupu means, they often experience a sudden feel for its meaning: ‘Oh, of course, tsupu!’”) Indices, by contrast, point to something else, as when a palm tree crashes down in the forest and a monkey understands that something dangerous may be happening and that it needs to move. All life, for Kohn, participates in icons and indices, whereas the third type of sign – symbols – involve convention and are unique to humans. When we link signs with all of life, we break out beyond “the conflation of representation with language” that characterizes most of anthropology and even “posthuman approaches that seek to dissolve the boundaries that have been erected to construe humans as separate from the rest of the world”.

more here.

No Choice but Freedom

Steve Randy Waldman in New Inquiry:

Milton Friedman famously argued that there is a link between capitalism and freedom. “A society which is socialist,” he wrote in Capitalism and Freedom, “cannot also be democratic, in the sense of guaranteeing individual freedom.” As an empirical matter, that statement remains as roughly true now as Friedman’s related claim that there is “no example in time or place of a society that has been marked by a large measure of political freedom, and that has not also used something comparable to a free market to organize the bulk of economic activity.” A socialist might quibble that the assertion is crafted to elide very stark distinctions between societies, hiding some important dimensions in which politically free Sweden, for example, diverges in economic policy from the laissez-faire Anglosphere. But even in the social-democratic Nordics, much of economic life, the “bulk” perhaps, plays out through money-intermediated arrangements between nonstate actors. That is, in markets.

The interesting question is why. Why does there seem to be a relationship between capitalism and political freedom? Friedman emphasizes the role of competitive and decentralized market actors in checking the concentration of power that political authorities might use to prevent freedom and dissent. If one is fired by a private employer for expounding unpopular views, there is always another private employer who may have different views. If employment is directed by a hierarchical state, the cost of dissidence may be penury or starvation.

Friedman argues that economic freedom, in constituting individuals’ ability to engage in whatever voluntary activities they wish to pursue, including exchanges of goods or services for money, is not only a means toward freedom, but is freedom itself. But what is freedom, itself?

More here.

The Self Portrait: a Cultural History

Andrew Marr at The New Statesman:

It would be possible (in fact, very easy) to write an agreeable book about the history of the self-portrait that took in the great masters, from Dürer and Michelangelo to Rembrandt, Van Gogh and Andy Warhol; a book that made reassuring and familiar points about artists we know and love, even if it didn’t change anything. Thank God, this is not that book.

What’s more – and here is a rare comment to find in a book review – James Hall’s cultural history is not long enough. The closely linked essays, taking us from the scribbled self-portraits of medieval monks right through to works composed of tin cans of excrement and photographs of body parts, include many revelations but Hall’s boundless curiosity explodes in all directions from the relatively few pages he has been allocated. Mostly we want more: more detail, more explanation and many more pictures. Given that this is a chunky and well-illustrated volume, that is meant as high praise.

Hall has quite a story to tell, because the history of the self-portrait is also the history of the status of the artist. For the ancient Greeks and Romans, self-portraits were relatively rare and unimportant; the cultural status of artists was very low.

more here.

Inuvik: the Canadian Arctic

Audrea Lim at n+1:

But in the Mackenzie Delta, the relationship between indigenous communities and the oil industry is complicated. In the popular imagination, oil usually appears as a Manichean fight between indigenous communities and oil companies, but it was the promise of oil that produced Inuvik. The history of modern development in the Delta—large-scale infrastructure development, the shift to settlement living and survival through the wage-economy, and integration into global economic networks—cannot be separated from the history of oil and gas. Unlike in Alberta, where agriculture spurred the development of infrastructure long before the oil industry swooped in, resource exploitation has been the single largest factor spurring development in the Delta, and thereby integrating it with the global economy and providing cheaper and better access to resources and amenities. At the same time, Canada’s treaties and land agreements with the Inuit and First Nations have laid the groundwork for an aboriginal ownership stake in the Mackenzie Gas Project, which some residents of the Delta hope will fund improvements to their communities.

more here.

Claiming a Copyright on Marx? How Uncomradely

Noam Cohen in the NYT (photo Jason Henry for The New York Times):

The Marxist Internet Archive, a website devoted to radical writers and thinkers, recently received an email: It must take down hundreds of works by Karl Marx and Friedrich Engels or face legal consequences.

The warning didn’t come from a multinational media conglomerate but from a small, leftist publisher, Lawrence & Wishart, which asserted copyright ownership over the 50-volume, English-language edition of Marx’s and Engels’s writings.

To some, it was “uncomradely” that fellow radicals would deploy the capitalist tool of intellectual property law to keep Marx’s and Engels’s writings off the Internet. And it wasn’t lost on the archive’s supporters that the deadline for complying with the order came on the eve of May 1, International Workers’ Day.

“Marx and Engels belong to the working class of the world spiritually, they are that important,” said David Walters, one of the organizers of the Marxist archive. “I would think Marx would want the most prolific and free distribution of his ideas possible — he wasn’t in it for the money.”

Still, Mr. Walters said the archive respected the publisher’s copyright, which covers the translated works, not the German originals from the 19th century. On Wednesday, the archive removed the disputed writings with a note blaming the publisher and a bold headline: “File No Longer Available!”

The fight over online control of Marx’s works comes at a historical moment when his ideas have found a new relevance, whether because the financial crisis of 2008 shook people’s confidence in global capitalism or, with the passage of time, the Marx name has become less shackled to the legacy of the Soviet Union. The unlikely best seller by the French economist Thomas Piketty, “Capital in the 21st Century,” harks back to Marx’s work, examining historical trends toward inequality in wealth.

More here.

Atheists: The Origin of the Species

Julian Baggini in The Guardian:

Like new Labour, so-called New Atheism did not just replace the old variety but, for a while at least, almost totally occluded it. Atheism is now sometimes discussed as though it began with the publication of Richard Dawkins's The God Delusion in 2006. To put these recent debates – or more often than not, flaming rows – in some sort of perspective, a thorough history of atheism is long overdue. The godless may not at first be pleased to discover that the person who has stepped up to the plate to write it comes from the ranks of the opposition. But Nick Spencer, research director of the Christian thinktank Theos, is the kind of intelligent, thoughtful, sympathetic critic that atheists need, if only to remind them that belief in God does not necessarily require a loss of all reason.

Spencer's story is designed to illuminate our present, so he understandably restricts himself to western Europe from the late middle ages onwards. It is a compendious though not definitive account, which shows why atheism is not simply the natural result of the rise of scientific knowledge, and religion a simplistic vestige of more ignorant times. Spencer rightly points out that, far from being enemies of religion, science and rationality were often most enthusiastically championed by men and women of faith. Locke and Newton were, for instance, both profoundly motivated by their Christianity.

More here.

Personal, Political, Physiological

From Harvard Magazine:

As far as sex and gender are concerned, malaria seems at first to be an equal-opportunity killer; the parasite, transmitted by mosquitoes, affects women and men alike. Yet sex and gender intrude even into this seemingly isolated medical realm. As a report from the World Health Organization details, biological sex differences alter malaria outcomes—changes in the immune system, for instance, make pregnant women especially susceptible to the disease. Meanwhile, social notions of gender may have an effect on outcomes as well: men, working in fields, may be more frequently exposed to the disease, while women, caring for children and often lacking autonomy, may be less likely to seek treatment. “Health is rarely only about health alone,” said Lizabeth Cohen, dean of the Radcliffe Institute for Advanced Study (RIAS), as she opened a two-day conference titled “Who Decides? Gender, Medicine, and the Public Health.” A series of panels explored various intersections of the social and biological realms, ranging from gendered definitions of illness and inequities in research funding to political debates over access to care. As conference organizer Janet Rich-Edwards, associate professor of epidemiology and co-director of the RIAS Academic Ventures science program, declared, “The personal is the political is the physiological.”

The conference opened with a reading by Tony Award-winning playwright and activist Eve Ensler, author of The Vagina Monologues, from her recent book, In the Body of the World: A Memoir of Cancer and Connection. Ensler, who will be artist-in-residence at the American Repertory Theater over the next several years, described how, following her childhood experience of sexual abuse, cancer and chemotherapy brought her to acknowledge and accept her own body. “When I was done with all those months of chemo, I felt like something had been burned away,” she said. Upon returning to the Democratic Republic of the Congo, where she had been working to help victims of rape and sexual assault, Ensler envisioned her most recent campaign, One Billion Rising, a global movement that uses dance to protest violence against women. Dance, she said, “is not just rage. It’s joy, it’s possibility.”

More here.

Thursday Poem

How to Write the Great American Indian Novel

All of the Indians must have tragic features: tragic noses, eyes, and arms.
Their hands and fingers must be tragic when they reach for tragic food.

The hero must be a half-breed, half white and half Indian, preferably
from a horse culture. He should often weep alone. That is mandatory.

If the hero is an Indian woman, she is beautiful. She must be slender
and in love with a white man. But if she loves an Indian man

then he must be a half-breed, preferably from a horse culture.
If the Indian woman loves a white man, then he has to be so white

that we can see the blue veins running through his skin like rivers.
When the Indian woman steps out of her dress, the white man gasps

at the endless beauty of her brown skin. She should be compared to nature:
brown hills, mountains, fertile valleys, dewy grass, wind, and clear water.

If she is compared to murky water, however, then she must have a secret.
Indians always have secrets, which are carefully and slowly revealed.

Yet Indian secrets can be disclosed suddenly, like a storm.
Indian men, of course, are storms. They should destroy the lives

of any white women who choose to love them. All white women love
Indian men. That is always the case. White women feign disgust

at the savage in blue jeans and T-shirt, but secretly lust after him.
White women dream about half-breed Indian men from horse cultures.

Indian men are horses, smelling wild and gamey. When the Indian man
unbuttons his pants, the white woman should think of topsoil.

There must be one murder, one suicide, one attempted rape.
Alcohol should be consumed. Cars must be driven at high speeds.

Indians must see visions. White people can have the same visions
if they are in love with Indians. If a white person loves an Indian

then the white person is Indian by proximity. White people must carry
an Indian deep inside themselves. Those interior Indians are half-breed

and obviously from horse cultures. If the interior Indian is male
then he must be a warrior, especially if he is inside a white man.

If the interior Indian is female, then she must be a healer, especially if she is inside
a white woman. Sometimes there are complications.

An Indian man can be hidden inside a white woman. An Indian woman
can be hidden inside a white man. In these rare instances,

everybody is a half-breed struggling to learn more about his or her horse culture.
There must be redemption, of course, and sins must be forgiven.

For this, we need children. A white child and an Indian child, gender
not important, should express deep affection in a childlike way.

In the Great American Indian novel, when it is finally written,
all of the white people will be Indians and all of the Indians will be ghosts.

by Sherman Alexie.

What Piketty Leaves Out

Robert Kuttner in The American Prospect:

Despite some losses to financial capital during the Great Depression, the more powerful era of equality in the U.S. began during World War II. The war was a massive macroeconomic stimulus; it produced full employment, stronger unions, and investment of public capital. The government’s wartime policies also repressed private finance in multiple and reinforcing ways, including the Fed’s pegging interest rates on Treasury bonds at a maximum of 2.5 percent, marginal tax rates set as high as 94 percent, and an intensification of the anti-speculative financial regulation of the New Deal. All of this did not end with the war. It had a half-life well into the postwar era, until unions were bashed and finance deregulated beginning in the 1970s.

Piketty mentions some of this briefly but doesn’t focus on the political dynamics, and he is surprisingly blasé about the role of deliberate policy. “Neither the economic liberalization that began around 1980 nor the state intervention that began in 1945 deserves much praise or blame,” he contends. “The most one can say is that state intervention did no harm.” But this can’t be true. The key difference in the two trajectories of non-recovery after World War I and robust recovery after World War II was in the policies pursued.

The aftermath of the first war led to depression and fascism, while World War II was followed by a boom of widely shared prosperity. In the reconstruction period of 1944-1948, policymakers, cognizant of the mistakes of the Treaty of Versailles and the deflationary 1920s, deliberately created the conditions for domestic full-employment welfare states. There was a great deal more to the anomalous era of shared growth than the shrinkage of inherited wealth, though it’s certainly the case that the weakening of financial elites made possible a politics of broad gains for the wage-earning class.

More here.

Wednesday, April 30, 2014

What exactly were the implications of World War I?

Gaby Zipfel at Eurozine:

With contemporary historians like Gerd Krumeich assessing the World War from 1914 to 1918 as “one of the formative experiences of the century, perhaps even the decisive factor in shaping it”, an event that each generation examines anew “in the light of old insights and new experiences, and the theoretical approaches gleaned from their own lifeworld”,[1] it would seem high time to ask about the extent to which this war a century ago influenced the gender hierarchy of the western world. Wolfgang Mommsen goes much further, identifying the First World War as, “in a certain sense, the fatal crisis of old bourgeois Europe”, as a result of which “major parts of pre-war orders and institutions were destroyed, but above all the social structures were changed substantially.”[2] This raises the question of the extent to which these changes affected the way gender characters were shaped, how the sexes related to each other and their options for action. Three aspects are always elementary in determining the subject status of the sexes in a given social system: firstly, the question of economic independence, that is, the possibility of supporting oneself financially; secondly, the question of citizenship rights and possibilities for participating in the public sphere, and thirdly, the question of sexual self-determination, that is, of control over one's own body and reproductive capacity. Accordingly, it must be asked to what extent the First World War influenced, changed or reorganized and fortified the gender hierarchy based on complementary public and private spheres.

more here.