Stillicide – stunning meditation on climate crisis

Nina Allan in The Guardian:

“I’d never heard the word before. Stillicide,” a corporate executive named Steven thinks to himself around a third of the way through Cynan Jones’s fragmented, marvellously compressed novel of the same title. “Water falling in drops. I challenge myself to get it into a sentence for the [journalists].” I had not heard the word before either, though Jones helpfully opens with a dictionary definition. And it is this image of dripping water and its powers of erosion that comes to define his book as a whole, both as a novel that confronts the challenge of describing what climate crisis might look like, and in the way a slow accretion of pertinent detail gathers cataclysmic momentum.

Stillicide takes place in the near future, when phases of extreme weather have plunged Britain into an alternating cycle of flood and drought. As temperatures continue to rise, smaller rural communities are becoming unsustainable, while the logistics of feeding and watering the growing city population have come to dominate the economic agenda. Steven is a PR spokesman for the corporation in charge of supplying an unnamed city with potable water. Previous attempts to augment the overstretched supply – an overground pipeline, an armoured freight train – have become the focus of terrorist activity, with raiding parties from the countryside violently advancing the demands of those who live beyond the urban centres. A new plan is hatched: an iceberg is to be towed from the Arctic and brought to an “ice dock” where its abundance of pure drinking water can be tapped and distributed. It is hoped that the iceberg will be largely immune from attacks by vigilantes and climate protestors. Moreover, meltwater from the berg – stillicide – can be utilised to irrigate agricultural land throughout the duration of its passage.

More here.



Reason Won’t Save Us

Robert Burton in Nautilus:

In wondering what can be done to steer civilization away from the abyss, I confess to being increasingly puzzled by the central enigma of contemporary cognitive psychology: To what degree are we consciously capable of changing our minds? I don’t mean changing our minds as to who is the best NFL quarterback, but changing our convictions about major personal and social issues that should unite but invariably divide us. As a senior neurologist whose career began before CAT and MRI scans, I have come to feel that conscious reasoning, the commonly believed remedy for our social ills, is an illusion, an epiphenomenon supported by age-old mythology rather than convincing scientific evidence. If so, it’s time for us to consider alternate ways of thinking about thinking that are more consistent with what little we do understand about brain function. I’m no apologist for artificial intelligence, but if we are going to solve the world’s greatest problems, there are several major advantages in abandoning the notion of conscious reason in favor of seeing humans as having an AI-like “black-box” intelligence.

But first, a brief overview as to why I feel so strongly that purely conscious thought isn’t physiologically likely. To begin, manipulating our thoughts within consciousness requires that we have a modicum of personal agency. To this end, rather than admit that no one truly knows what a mind is or how a thought arises, neuroscientists have come up with a number of ingenious approaches designed to unravel the slippery relationship between consciousness and decision-making. In his classic 1980s experiments, University of California, San Francisco, neurophysiologist Benjamin Libet noted a consistent change in brain wave activity (a so-called “readiness potential”) prior to a subject’s awareness of having decided to move his hand. Libet’s conclusion was that the preceding activity was evidence for the decision being made subconsciously, even though subjects felt that the decision was conscious and deliberate. Since that time his findings, supported by subsequent similar results on fMRI and direct brain recordings, have featured prominently in refuting the notion of humans possessing free will. However, others presented with the same evidence strongly reject this interpretation.

More here.

Wednesday Poem

Dear No. 24601

The future is an eye that I don’t dare look into
Last night I dreamed I was a ball of fire
and woke up on the wrong side of the room
this is a recurring dream
I share an apartment with my twin sister
Enclosed is a photo of us on a tandem bike
I forget which one I am
Sometimes I wake up believing I am her
she is me
and there is nothing about the day to indicate otherwise
Weeks stack up this way
As a girl I did not do well with other children
Unable to see the fun in games
which were only ever maddening
I paid close attention to the weather
delighting in hail and not much else
save a prized collection of Hummel figurines
derived from the pastoral sketches
of Sister Maria Innocentia Hummel
German Franciscan nun and talented artist
Her simple peaceful works
drew the enduring hatred of Hitler himself
You know Hummel translates as ‘bumblebee’ in German
and they say she was always ‘buzzing around’
What do you think do we grow into our names
or does kismet know a thing
One name can mean too much
the other not nearly enough
The details make a difference
like sitting on the white cushion
as opposed to the blue
white is pure of course
but my soul’s been in the bargain bin since Russia
and Lenin’s tomb
I had a moment there
among the balustrades
and once that moment had expired
it graduated
from a moment to a life

by Sophie Collins
from
Poetry International, 2019

Tuesday, October 22, 2019

Economics’ Biggest Success Story Is a Cautionary Tale

Photo: Esther Duflo and Abhijit Banerjee, who share the 2019 Nobel Prize in Economics with Michael Kremer, at a press conference at the Massachusetts Institute of Technology on October 14, 2019, in Cambridge, Massachusetts. (Scott Eisen/Getty Images)

Sanjay G. Reddy in Foreign Policy:

RCTs cannot reveal very much about causal processes since at their core they are designed to determine whether something has an effect, not how. The randomistas have attempted to deal with this charge by designing studies to interpret whether variations in the treatment have different effects, but this requires a prior conception of what the causal mechanisms are. The lack of understanding of causation can limit the value of any insights derived from RCTs in understanding economic life or in designing further policies and interventions. Ultimately, the randomistas tested what they thought was worth testing, and this revealed their own preoccupations and suppositions, contrary to the notion that they spent countless hours listening to and in close contact with the poor. It is not surprising that economists doing RCTs have therefore been centrally concerned with the effects of incentives on individual behavior—for instance, examining the idea that contract teachers who fear losing their jobs will be more effective than those with a guarantee of employment.

But valuable innovations in everyday life, whether on the small or large scale, are likely to result from explorations of a more open-ended kind. This requires that people experiment with the institutions of which they are a part, which is not the same as conducting randomized experiments on other people. Policies (and reforms of policies) that go beyond one dimension are essential in a complex environment. For instance, better schools are likely to result both from measures dealing with teachers’ employment and ones dealing with curriculum, community participation, and funding arrangements. RCTs simply cannot advise us on how best to combine all of these, let alone on how to think creatively about them. Better schools may also result from changes that result from improvements in other domains beyond the individual school—for instance, safer neighborhoods, better drug policy, or lessened poverty. The actions needed to achieve better outcomes may sometimes only be possible to undertake at a level going much beyond the locality. A good example is provided by the iodization of salt, which has contributed not only to better health but may also have improved educational outcomes.

More here.

When the C.I.A. Was Into Mind Control

Sharon Weinberger in the New York Times:

In 1955, R. Gordon Wasson set off for southern Mexico to experience a sacred Indian ceremony rumored to provide a “pathway to the divine.” Wasson later extolled the mystical effects of what he called the “magic mushroom,” the Mexican plant used in the ceremony, in a 1957 photo-essay for Life magazine.

Wasson’s article, read by millions, helped set the stage for an eventual cultural revolution that peaked with Timothy Leary, the former Harvard professor who proselytized for LSD and called on Americans to “turn on, tune in, drop out.” The seminal role Wasson’s trip played in promoting mind-bending drugs and the accompanying cultural revolution has been described before, including in Michael Pollan’s recent book, “How to Change Your Mind,” but a new biography by Stephen Kinzer, a former foreign correspondent for The New York Times, adds a key detail to this fascinating history.

“Poisoner in Chief: Sidney Gottlieb and the CIA Search for Mind Control” describes how, unbeknown to Wasson, the spy agency was funding his travel. In fact, Wasson’s trip “would electrify mind control experimenters in Washington whose ambitions were vastly different from his own.”

More here.

Sean Carroll’s Mindscape Podcast: Cory Doctorow on Technology, Monopoly, and the Future of the Internet

Sean Carroll in Preposterous Universe:

Like so many technological innovations, the internet is something that burst on the scene and pervaded human life well before we had time to sit down and think through how something like that should work and how it should be organized. In multiple ways — as a blogger, activist, fiction writer, and more — Cory Doctorow has been thinking about how the internet is affecting our lives since the very beginning. He has been especially interested in legal issues surrounding copyright, publishing, and free speech, and recently his attention has turned to broader economic concerns. We talk about how the internet has become largely organized through just a small number of quasi-monopolistic portals, how this affects the ways in which we gather information and decide whether to trust outside sources, and where things might go from here.

More here.

The End of Neoliberalism?

Jeff Sparrow in the Sydney Review of Books:

While all men might be equal in death, all sponsors must be thanked in appropriately sized font. The memorial courtyard now contains an eternal flame, a donation from AGL, Santos and East Australian Pipelines. The gas for the eternal flame is ‘generously’ provided by Origin Energy under a sponsorship agreement. The gas industry’s ‘sacrifice’ in funding a tiny fraction of the local cost of the Australian War Memorial receives far more prominence than the names of Australians who gave their lives for our country. Lest we forget our sponsors. … While the irony of sponsorship by the oil industry, a fuel over which so many wars were fought in the twentieth century, might be missed by some, surely no one could miss the irony of BAE Systems, Lockheed Martin, Thales and other weapons manufacturers sponsoring the Australian War Memorial.

That striking passage comes from Richard Denniss’ new book Dead Right: how neoliberalism ate itself and what comes next. For Denniss, the evolution of the Australian War Memorial into a giant billboard illustrates the logic of neoliberalism, something that, he says, ‘has wounded our national identity, bled our national confidence, caused paralysis in our parliaments and is eating away at the identity of those on the right of Australian politics’.

Certainly, Lockheed Martin’s involvement with an institution purportedly commemorating battlefield deaths represents a particularly crass commercialism, an unapologetic assertion of corporate interests over human sensibilities. Yet does that make it neoliberal?

More here.

The Gloriously Understated Career of Elaine Stritch

Alexandra Jacobs at Lit Hub:

But by far the most affecting performance came toward the event’s end, when the lights dimmed and an image of Stritch herself materialized on a big screen, like a glamorous ghost, in what might have been called her prime had she not so forcefully redefined that term. Wearing an ensemble of white blouse and black tights cribbed from Judy Garland’s famous “Get Happy” sequence but carried off even more effectively with her long, slim legs, she began the Sondheim song “The Ladies Who Lunch,” from the landmark 1970 musical Company, which was for so many years her signature anthem.

The Stritch-specter inhabited the dark world of the lyrics completely: cocking her silvery blonde head at the camera, enunciating, clasping her manicured hands as if in prayer, raising and furrowing professionally arched eyebrows, grinning, winking, nodding, jabbing, giving the okay sign, beckoning, pumping a fist, clawing, and throwing both hands up in a V shape that seemed to signify equally victory and defeat.

more here.

My Teacher, Harold Bloom

Gary Saul Morson at The American Scholar:

The positive lesson was that the most important thing a teacher can convey is a deep love of literature and an understanding that it offers insights, wisdom, and experiences to be found nowhere else. Nothing could be further from Bloom than the usual ways in which most students are taught literature today. Most learn mechanics: let’s find symbols. Others are instructed to see the work as a mere document of its times. And many are taught to summon the author before the stern tribunal of contemporary beliefs so as to measure where she approached modern views and where she fell short. (Bloom was to name such criticism “the school of resentment.”) Each of these approaches places the critic in a position superior to great works, which makes it hard to see why it is worth the effort to read them. Bloom instructed us to do the opposite: presume that the poets are wiser than we are so we can immerse ourselves in their works and share in their insights. Then the considerable difficulty of reading Milton or Spenser or Shelley makes sense.

more here.

Doris Lessing and The Veld

Lara Feigel at The New Statesman:

The landscape of Lessing’s childhood – and her sense of being in exile from it afterwards – remained, I think, the key to her writing in the 40 books that eventually gained her a Nobel Prize. Her experience of the veld was crucial to her politics. She became a communist because she was outraged by the system of racial segregation known as the colour bar, oppressing the black people she heard playing the drums at night outside in the bush while her mother played Chopin on the piano. And the veld was also crucial to her life as a feminist. After roaming freely as a child, sometimes pausing to shoot guinea fowl, she didn’t understand the conventions governing women’s lives in the city. Living in the Southern Rhodesian capital of Salisbury (now Harare), she found the nuclear family unbearably claustrophobic and longed to escape a social world that restricted the independence of women. And so, in 1942, aged 23, she abandoned her marriage, leaving behind two children.

Looking back on Lessing now, a hundred years after her birth, it’s the freedom with which she thought and acted for herself that makes her so enticing. This was the freedom to leave her first marriage (“I would have had to live at odds with myself, riven, hating what I was part of, for years”) and then to have a new child with her second husband, Gottfried Lessing, though she knew they were going to split up.

more here.

A fascinating study of why we misread those we don’t know

Andrew Anthony in The Guardian:

Some years and several books ago, the New Yorker journalist Malcolm Gladwell moved from being a talented writer to a cultural phenomenon. He has practically invented a genre of nonfiction writing: the finely turned counterintuitive narrative underpinned by social science studies. Or if not the inventor then someone so closely associated with the form that it could fall under the title of Gladwellian.

His latest book, Talking to Strangers, is a typically roundabout exploration of the assumptions and mistakes we make when dealing with people we don’t know. If that sounds like a rather vague area of study, that’s because in many respects it is – there are all manner of definitional and cultural issues through which Gladwell boldly navigates a rather convenient path. But in doing so he crafts a compelling story, stopping off at prewar appeasement, paedophilia, espionage, the TV show Friends, the Amanda Knox and Bernie Madoff cases, suicide and Sylvia Plath, torture and Khalid Sheikh Mohammed, before coming to a somewhat pat conclusion. The tale begins with Sandra Bland, the African American woman who in July 2015 was stopped by a traffic cop in a small Texas town. She was just about to begin a job at Prairie View A&M University, when a police car accelerated up behind her. Doing what almost all of us would have done, she moved aside to let the car pass. And just like most of us in that situation, she didn’t bother indicating. It was on that technicality that the cop, Brian Encinia, ordered her to pull over.

More here.

Why Mammalian Brains are Geared Toward Kindness

Patricia Churchland in The Scientist:

Three myths about morality remain alluring: only humans act on moral emotions, moral precepts are divine in origin, and learning to behave morally goes against our thoroughly selfish nature. Converging data from many sciences, including ethology, anthropology, genetics, and neuroscience, have challenged all three of these myths. First, self-sacrifice, given the pressing needs of close kin or conspecifics to whom they are attached, has been documented in many mammalian species—wolves, marmosets, dolphins, and even rodents. Birds display it too. In sharp contrast, reptiles show no hint of this impulse.

Second, until very recently, hominins lived in small groups with robust social practices fostering well-being and survival in a wide range of ecologies. The idea of a divine lawgiver likely played no part in their moral practices for some two million years, emerging only with the advent of agriculture and larger communities where not everyone knew everyone else. The divine lawgiver idea is still absent from some large-scale religions, such as Confucianism and Buddhism. Third, it is part of our genetic heritage to care for kith and kin. Although self-sacrifice is common in termites and bees, the altruistic behavior of mammals and birds is vastly more flexible, variable, and farsighted. Attachment to others, mediated by powerful brain hormones, is the biological platform for morality. As I write in my new book, Conscience: “Between them, the circuitry supporting sociality and self-care and the circuitry for internalizing social norms create what we call conscience. In this sense, your conscience is a brain construct, whereby your instincts for caring, for self and others, are channeled into specific behaviors through development, imitation, and learning.”

More here.

Sunday, October 20, 2019

On Harold Bloom

William Flesch and Marco Roth in n + 1:

Like other teachers and sages I’d known and apprenticed myself to for seasons of my life, Bloom performed, but what he didn’t perform was pedagogy or teaching, not for himself and not for us. He just did readings, in the Bloom way, which was an ongoing drama, in words, between the work at hand and the absent works, lines and phrases that the work had brought itself into being from. This isn’t the same as watered down “intertextuality” or “influence studies” or, god forbid, some kind of seminar or salon-like conversation. It wasn’t “New Critical” thing-in-itself close reading, because poems weren’t things in themselves, they were living subjects, and as full of contradictions and private dramas and unconscious desires and hauntings as any other. While some critics thought about the “political unconscious” and others of just the human unconscious, Bloom found a way to surface the poetic unconscious.

He would sit there, channeling, almost always quoting from memory and at the speed of memory, a few teasing questions to set himself off and running. And he would run nonstop until the doctor-mandated “break,” when he might shift his corpulence, button up the shirt and shuffle unaided for water, or sit mopping his brow, or quiet, eyes closed, returning into silence. He leaked humanity.

More here.

Doubting death: how our brains shield us from mortal truth

Ian Sample in The Guardian:

Warning: this story is about death. You might want to click away now.

That’s because, researchers say, our brains do their best to keep us from dwelling on our inevitable demise. A study found that the brain shields us from existential fear by categorising death as an unfortunate event that only befalls other people.

“The brain does not accept that death is related to us,” said Yair Dor-Ziderman, at Bar Ilan University in Israel. “We have this primal mechanism that means when the brain gets information that links self to death, something tells us it’s not reliable, so we shouldn’t believe it.”

Being shielded from thoughts of our future death could be crucial for us to live in the present. The protection may switch on in early life as our minds develop and we realise death comes to us all.

More here.

American ‘Freedom Man’ is Made of Straw

Andrew J. Bacevich in The American Conservative:

Is a penchant for moral posturing part of a newspaper columnist’s job description? Sometimes it seems so. But if there were a prize for self-indulgent journalistic garment-rending, Bret Stephens of The New York Times would certainly retire the trophy.

To introduce a recent reflection on “the global lesson from the regional catastrophe that is Donald Trump’s retreat in Syria,” Stephens begins with a warm-and-fuzzy parable. “The time is the early 1980s,” he writes.

The place is the South China Sea. A sailor aboard the U.S.S. Midway, an aircraft carrier, spots a leaky boat jammed with people fleeing tyranny in Indochina. As he helps bring the desperate refugees to safety, one of them calls out: “Hello, American sailor — Hello, Freedom Man.”

Today, alas, Freedom Man has become “a fair-weather friend,” according to Stephens. Thanks to President Trump, America can no longer be trusted. And “the idealism that stormed Normandy, fed Europe, democratized Japan, and kept West Berlin free belongs to an increasingly remote past.”

How I wish that this litany of good deeds accurately summarized U.S. history in the decades since American idealism charged ashore at Omaha Beach. But wishing won’t make it so—unless, perhaps, you make your living as a newspaper columnist.

More here.

Patricia S. Churchland: The Nature of Moral Motivation

Patricia S. Churchland at Edge:

The question that I’ve been perplexed by for a long time has to do with moral motivation. Where does it come from? Is moral motivation unique to the human animal or are there others? It’s clear at this point that moral motivation is part of what we are genetically equipped with, and that we share this with mammals, in general, and birds. In the case of humans, our moral behavior is more complex, which is probably because we have bigger brains. We have more neurons than, say, a chimpanzee, a mouse, or a rat, but we have all the same structures. There is no special structure for morality tucked in there.

Part of what we want to know has to do with the nature of the wiring that supports moral motivation. We know a little bit about it, namely that it involves important neurochemicals like oxytocin and vasopressin. It also involves the hormones that have to do with pleasure, endocannabinoids and the endogenous opioids. That’s an important part of the story. The details are by and large missing. And what I would love to know, of course, is much more about the details.

More here.

Inside Aspen: the mountain retreat for the liberal elite

Linda Kinstler in 1843 Magazine:

The idea for the Aspen Institute first emerged after the second world war. In 1949 Walter Paepcke, a Chicago businessman, planned a bicentennial celebration of the life of Goethe. Paepcke and his wife, Elizabeth, chose Aspen because it was both beautiful and easily accessible from either coast. The couple felt there was an “urgent need” to understand Goethe’s thought: the world, still recovering from the war, had been cleft in half by the ideological battle between communism and capitalism. The Paepckes saw Goethe as a prime advocate of the underlying unity of mankind. He also worried about the corrosive effects of rapidly proliferating wealth. The Paepckes imagined that Aspen could become an “American Athens”, educating an upper-crust elite hungry for spiritual sustenance in the newly ascendant nation. Such work was vital “if the people of America and other nations are to strengthen their will for decency, ethical conduct and morality in a modern world”. Herbert Hoover, the former president, was named honorary chairman; Thomas Mann joined the board of directors.

In 1950 the Aspen Institute for Humanistic Studies was founded as a place of moral instruction for the “power elite”. The Paepckes didn’t want their creation to be merely a think-tank dedicated to policymakers. Nor were they interested in emulating business schools. They wanted to shape leaders, not merely improve managers. Back then, Aspen’s version of inclusivity meant inviting the men in suits. The new curriculum was modelled on what was known as the “Fat Man’s Great Books Class”, which Mortimer Adler, a philosopher who co-founded the Aspen Institute, had run in wartime Chicago exclusively for executives. The idea was that if thinkers and businessmen were forced into the same room they’d be cured of their mutual suspicion and “join together to supplant the vulgarity and aimlessness of American life”. Through encounters with the classics, executives would learn to restrain the worst excesses of capitalism and politicians would be able to draw on the wisdom of the ages as they reached their decisions. The “Aspen method” was born.

More here.

How evolution builds genes from scratch

Adam Levy in Nature:

In the depths of winter, water temperatures in the ice-covered Arctic Ocean can sink below zero. That’s cold enough to freeze many fish, but the conditions don’t trouble the cod. A protein in its blood and tissues binds to tiny ice crystals and stops them from growing. Where codfish got this talent was a puzzle that evolutionary biologist Helle Tessand Baalsrud wanted to solve. She and her team at the University of Oslo searched the genomes of the Atlantic cod (Gadus morhua) and several of its closest relatives, thinking they would track down the cousins of the antifreeze gene. None showed up. Baalsrud, who at the time was a new parent, worried that her lack of sleep was causing her to miss something obvious.

But then she stumbled on studies suggesting that genes do not always evolve from existing ones, as biologists long supposed. Instead, some are fashioned from desolate stretches of the genome that do not code for any functional molecules. When she looked back at the fish genomes, she saw hints this might be the case: the antifreeze protein — essential to the cod’s survival — had seemingly been built from scratch [1]. The cod is in good company. In the past five years, researchers have found numerous signs of these newly minted ‘de novo’ genes in every lineage they have surveyed. These include model organisms such as fruit flies and mice, important crop plants and humans; some of the genes are expressed in brain and testicular tissue, others in various cancers.

More here.