reading dante

Rowan Williams at the TLS:

So this is a satisfying and enormously suggestive book, arguing not the obvious point that Dante’s poetry is consistently inflected by theological concerns (which is like saying that it’s written in Italian), but a more radical thesis – that the poem seeks to enact its subject matter. It does so, Montemaggi contends, not only by inviting us into the mystery of divine love in general terms, but by two very specific kinds of challenge. We are invited to ask ourselves as readers what it would mean for us to become signifiers of God as Dante claims to be, finite agents of an infinite authorship; and we are reminded – in the light of this – that we as readers are equipped to make a difference to the author of the poem – and, indeed, to those he writes about insofar as they are in need of our solidarity and prayer. And this recasts entirely the way in which we look for theology in the work. These are not questions about “content”. Montemaggi argues provocatively that there can be no “doctrine or content” in the Commedia within the metaphysical framework that Dante assumes. The poem is not a versification of doctrinal propositions, but an attempt to allow the being of the Christian God to become transparent and actively transformative in the words recited and read or heard. The poem is an incarnation, remotely comparable to that focal and unique coincidence of finite and infinite action which is Christ’s life. The reader may or may not accept the terms of the relationships presupposed in the poem (namely the relationships between God and poet, poet and reader, reader and fellow believer or needy soul), but an agnostic or otherwise detached reader will not understand the poem without grasping that this is the sort of thing it is.

more here.



Bacteria Use Brainlike Bursts of Electricity to Communicate

Gabriel Popkin in Quanta:

Bacteria have an unfortunate — and inaccurate — public image as isolated cells twiddling about on microscope slides. The more that scientists learn about bacteria, however, the more they see that this hermitlike reputation is deeply misleading, like trying to understand human behavior without referring to cities, laws or speech. “People were treating bacteria as … solitary organisms that live by themselves,” said Gürol Süel, a biophysicist at the University of California, San Diego. “In fact, most bacteria in nature appear to reside in very dense communities.”

The preferred form of community for bacteria seems to be the biofilm. On teeth, on pipes, on rocks and in the ocean, microbes glom together by the billions and build sticky organic superstructures around themselves. In these films, bacteria can divide labor: Exterior cells may fend off threats, while interior cells produce food. And like humans, who have succeeded in large part by cooperating with each other, bacteria thrive in communities. Antibiotics that easily dispatch free-swimming cells often prove useless against the same types of cells when they’ve hunkered down in a film.

As in all communities, cohabiting bacteria need ways to exchange messages. Biologists have known for decades that bacteria can use chemical cues to coordinate their behavior. The best-known example, elucidated by Bonnie Bassler of Princeton University and others, is quorum sensing, a process by which bacteria extrude signaling molecules until a high enough concentration triggers cells to form a biofilm or initiate some other collective behavior.

But Süel and other scientists are now finding that bacteria in biofilms can also talk to one another electrically.

More here.

Some Countries Are Much Richer Than Others. Is That Unjust?

Jonny Anomaly and Hrishikesh Joshi in Quillette:

When we look around the world and observe the massive wealth disparities between citizens in rich and poor countries, many of us are apt to conclude that the differences must have arisen because of colonialism, imperial warfare, or theft of raw materials like gold or oil. Of course, all of these things have happened at various points in time, and they can arguably explain some variation in the standard of living. Colonialism can be especially destructive of institutions that support peace and commerce. But a recent article by the philosopher Dan Moller casts doubt on the view that injustices like these can explain much of the observable differences.

Instead, Moller musters economic data to suggest that blatant injustices barely show up in the overall trajectory of economic growth in most countries over long periods of time. Specifically, Moller appeals to “the Great Divergence,” illustrated by graphs like this:

The basic idea is that rich countries got rich by pulling away from poor countries, not by making other countries poor. For the longest stretch of human history, most people lived in what we would today consider to be extreme poverty. However, around the 18th century, some European countries started to diverge from this low baseline. Eventually countries like Japan followed suit, and more recently, Hong Kong and China. Today’s poor countries, on the other hand, for the most part did not become poor (Zimbabwe and Venezuela are notable counterexamples); rather, they remained poor.

Not surprisingly, for economists the main explanation for the Great Divergence is trade, fostered in part by favorable social and political institutions. This may seem obvious to those who understand that trade is a positive-sum game, and that there are exponential gains from trade as markets expand and the division of labor becomes more fine-grained. The problem is that most philosophers who write about global poverty are convinced otherwise. They think that people in wealthy countries are in some sense responsible for poverty in less developed countries, and that we therefore have an obligation to do something about it.

More here.

Rushdie’s Domus Aurea: “The Golden House”

Shehryar Fazli in the Los Angeles Review of Books:

A YouTube search of Salman Rushdie seemingly returns fresh hits by the week: some new lecture, interview, or panel discussion. It’s worth taking a moment to appreciate that this indispensable writer, who spent an earlier decade under a state-sanctioned death threat, is able to share his literary and cultural preoccupations as openly and frequently as he does. But ubiquity in public life can be risky for a novelist. In Rushdie’s latest, The Golden House, we encounter so many of those preoccupations that it often reads as if the eponymous Goldens are less a family than an extension of a conversation we’ve been having with the author for years.

Nero Golden and his three sons flee to the United States after losing Nero’s wife to the November 2008 Bombay (or Mumbai) terrorist attacks by the Pakistan-based Lashkar-e-Tayyaba, which claimed hundreds of lives. They arrive in Manhattan on the day of Barack Obama’s inauguration. We learn early that Nero had more dubious reasons for leaving India, where he clearly kept shady company. This is classic noir: the forces threatening the protagonist’s existence at home ultimately reappear, and keep reappearing, in his new life. In this case, the protagonist hides not in an obscure town but in the world’s capital.

More here.

The Big Data revolution can revive the planned economy

John Thornhill in the FT:

[I]n his book Homo Deus, the Israeli historian Yuval Noah Harari suggests a more prosaic reason: planned economies (and authoritarian regimes) are rubbish at processing data. “Capitalism won the cold war because distributed data-processing works better than centralised data-processing,” he wrote.

How could any central planner sitting in Gosplan’s offices in Moscow hope to understand all the moving parts of the Soviet economy across 11 time zones?

The lunacy of that system was best explained to me by Otto Latsis, the late Russian economic journalist. He recounted how he once visited a Soviet roof tile manufacturing company, which had big expansion plans. He discovered that all the investment was being sunk into a separate plant to destroy discarded, imperfect tiles to stop them being sold on the black market by the factory’s workers.

As with most Soviet factories, the planners had set demanding targets for the new plant, focusing on quantity rather than quality, creating perverse incentives to smash up near-perfect tiles. It is hard to think of a starker example of the value destruction that wrecked the Soviet economy.

But the explosion of data in our modern world could — at least in theory — inform far better managerial decisions and reduce the information imbalances between a planned and a market economy. Central planners are rapidly acquiring the tools to process data a lot more effectively.

More here.

Losing and Gaining Public Goods

K. Sabeel Rahman in Boston Review:

The clash over health care is the most glaring example of a more widespread battle over the meaning and importance of public goods: what they are, how they ought to be provided—and to whom. The question of whether to privatize and deregulate, or to restore—and even expand—public provision is at the heart of many contemporary political, economic, and moral debates. At the federal level, the question over public provision manifests in disputes over privatizing education or slashing funds for affordable housing. On a more local level, the poisonous water of Flint, Michigan, exemplifies the toll of the larger trend of budget-cutting and privatizing vital public services.

In economic terms, public goods are defined as being nonrivalrous and nonexcludable—meaning that one person’s consumption does not preclude another’s, and that it is difficult (or impossible) to prevent people from consuming the good without paying for it. Classic examples are light and air. A second, more conventional understanding of public goods focuses on the economics of production. Goods that have high sunk costs and increasing returns to scale are likely to be underprovided by ordinary market competition. Think cable TV and landlines: the massive expense involved in laying down wiring on a national scale discourages private investment, but the benefits of a national network increase as the network grows. These conventional public goods are therefore seen as a proper domain for governmental provision.

But the battles over health care, education, and other goods underway today express a very different view of public goods, one grounded not in economic terms of efficiency and production, but rather in moral and political concepts.

More here. Also see responses from Jacob Levy, Joshua Cohen, Lauren Jacobs and others.

The Misguided Focus on 1619 as the Beginning of Slavery in the U.S. Damages Our Understanding of American History

Michael Guasco in Smithsonian:

In 1619, “20. and odd Negroes” arrived off the coast of Virginia, where they were “bought for victualle” by labor-hungry English colonists. The story of these captive Africans has set the stage for countless scholars and teachers interested in telling the story of slavery in English North America. Unfortunately, 1619 is not the best place to begin a meaningful inquiry into the history of African peoples in America. Certainly, there is a story to be told that begins in 1619, but it is neither well-suited to help us understand slavery as an institution nor to help us better grasp the complicated place of African peoples in the early modern Atlantic world. For too long, the focus on 1619 has led the general public and scholars alike to ignore more important issues and, worse, to silently accept unquestioned assumptions that continue to impact us in remarkably consequential ways. As a historical signifier, 1619 may be more insidious than instructive.

…Telling the story of 1619 as an “English” story also ignores the entirely transnational nature of the early modern Atlantic world and the way competing European powers collectively facilitated racial slavery even as they disagreed about and fought over almost everything else. From the early 1500s forward, the Portuguese, Spanish, English, French, Dutch and others fought to control the resources of the emerging transatlantic world and worked together to facilitate the dislocation of the indigenous peoples of Africa and the Americas. As historian John Thornton has shown us, the African men and women who appeared almost as if by chance in Virginia in 1619 were there because of a chain of events involving Portugal, Spain, the Netherlands and England. Virginia was part of the story, but it was a blip on the radar screen.

More here.

The Problem with the Mutation-Centric View of Cancer

Brian Gallagher in Nautilus:

To better understand and treat cancer, physicians need to stop oversimplifying its causes. Cancer results not solely from genetic mutations but also from cells adapting to and thriving in micro-environments in the body. That’s the point of view of James DeGregori, a professor in the Department of Biochemistry and Molecular Genetics at the University of Colorado School of Medicine. In a recent Cancer Research paper, DeGregori took a trio of researchers—Cristian Tomasetti, Lu Li, and Bert Vogelstein—to task for their assessment of cancer risk. “Cancers,” they wrote in Science, “are caused by mutations that may be inherited, induced by environmental factors, or result from DNA replication errors.” The latter, they concluded, “are responsible for two-thirds of the mutations in human cancers.” Their model of cancer risk is “mutation-centric,” DeGregori said in a recent interview with Nautilus. “What I’m arguing is that they’re modeling risk without factoring in how the causes of cancer, like aging or smoking, impact selection for mutations by impacting tissue micro-environments,” he said. “Their assessments of risks will be wrong because they’re basically missing a major important factor.” In our conversation, DeGregori expanded on how a renewed focus on micro-environments and Darwinian evolutionary pressures can benefit cancer research.

How should we study the origins of cancer?

My lab has been researching the origins of cancers for the last 15 to 17 years. We’re trying to understand cancer from an evolutionary viewpoint, understanding how it evolves. A lot of people think about cancer from an evolutionary viewpoint. But what sets us apart is that we’ve really come to understand cancer by the context these cells find themselves in.

What’s an example of such a context?

While other people will think about aging as the time for mutations to cause advantageous events [for cancer] in cells, we see aging as a very different process. It’s not about the time you get mutations—you get many mutations when you’re young. It’s the tissue environment for the cells that changes dramatically as we age. Those new tissue environments basically stimulate the evolution. So the evolution isn’t a process that’s limited by the mutation so much as a process that is limited by micro-environment changes.

More here.

Wednesday, September 13, 2017

Our imaginative life today has access to the pre-linguistic, ancestral mind

Stephen Asma in Aeon:

Imagination is intrinsic to our inner lives. You could even say that it makes up a ‘second universe’ inside our heads. We invent animals and events that don’t exist, we rerun history with alternative outcomes, we envision social and moral utopias, we revel in fantasy art, and we meditate both on what we could have been and on what we might become. Animators such as Hayao Miyazaki, Walt Disney and the people at Pixar Studios are masterful at imagination, but they’re only creating a public version of our everyday private lives. If you could see the fantastic mash-up inside the mind of the average five-year-old, then Star Wars and Harry Potter would seem sober and dull. So, why is there so little analysis of imagination, by philosophers, psychologists and scientists?

Apart from some cryptic passages in Aristotle and Kant, philosophy has said almost nothing about imagination, and what it says seems thoroughly disconnected from the creativity that artists and laypeople call ‘imaginative’.

Aristotle described the imagination as a faculty in humans (and most other animals) that produces, stores and recalls the images we use in a variety of mental activities. Even our sleep is energised by the dreams of our involuntary imagination. Immanuel Kant saw the imagination as a synthesiser of senses and understanding. Although there are many differences between Aristotle’s and Kant’s philosophies, Kant agreed that the imagination is an unconscious synthesising faculty that pulls together sense perceptions and binds them into coherent representations with universal conceptual dimensions. The imagination is a mental faculty that mediates between the particulars of the senses – say, ‘luminous blue colours’ – and the universals of our conceptual understanding – say, the judgment that ‘Marc Chagall’s blue America Windows (1977) is beautiful.’ Imagination, according to these philosophers, is a kind of cognition, or more accurately a prerequisite ‘bundling process’ prior to cognition. Its work is unconscious and it paves the way for knowledge, but is not abstract or linguistic enough to stand as actual knowledge.

More here.

Unspeakable Realities Block Universal Health Coverage In America

Chris Ladd in Forbes [h/t: Houston Spencer]:

When it seems like people are voting against their interests, I have probably failed to understand their interests. We cannot begin to understand Election 2016 until we acknowledge the power and reach of socialism for white people.

Americans with good jobs live in a socialist welfare state more generous, cushioned and expensive to the public than any in Europe. Like a European system, we pool our resources to share the burden of catastrophic expenses, but unlike European models, our approach doesn’t cover everyone.

Like most of my neighbors I have a good job in the private sector. Ask my neighbors about the cost of the welfare programs they enjoy and you will be greeted by baffled stares. All that we have is “earned” and we perceive no need for government support. Nevertheless, taxpayers fund our retirement saving, health insurance, primary, secondary, and advanced education, daycare, commuter costs, and even our mortgages at a staggering public cost. Socialism for white people is all-enveloping, benevolent, invisible, and insulated by the nasty, deceptive notion that we have earned our benefits by our own hand.

My family’s generous health insurance costs about $20,000 a year, of which we pay only $4,000 in premiums. The rest is subsidized by taxpayers. You read that right. Like virtually everyone else on my block who isn’t old enough for Medicare or employed by the government, my family is covered by private health insurance subsidized by taxpayers at a stupendous public cost. Well over 90% of white households earning over the white median income (about $75,000) carried health insurance even before the Affordable Care Act. White socialism is nice if you can get it.

More here.

How I Got Fired from a D.C. Think Tank for Fighting Against the Power of Google

Zephyr Teachout in The Intercept:

Lynn and his team pushed Democrats to embrace anti-monopoly as a serious policy issue, catalyzed a public debate about Amazon’s power, and spearheaded an intellectual revolution around antitrust enforcement to overturn the consumer welfare standard developed by Robert Bork. Even those who disagreed with Open Markets never questioned their integrity. As Cornell Law School Professor James Grimmelmann tweeted yesterday, “something unsettling and dangerous is happening in tech markets.” And while Grimmelmann often disagreed with Lynn and Open Markets, he added that “unless teams like Open Markets get the support and freedom they need to keep thinking and writing about it, NO ONE WILL.”

Apparently these ideas threatened Google.

In June, when the European Union fined Google $2.7 billion for abusing its dominant position to serve itself and quash competition, the Open Markets team put out a press statement that was entirely consistent with its longstanding position. It praised the EU’s action, and argued that American antitrust authorities should also look at Google’s use of its search power to leverage its influence in other markets.

New America’s leadership must have gotten an earful. Within 72 hours, New America’s president, Anne-Marie Slaughter, told Lynn that he — and all of us on the Open Markets team — had to leave. As the New York Times reported yesterday, Slaughter emailed Lynn to say that “the time has come for Open Markets and New America to part ways,” and the email accused Lynn of “imperiling the institution as a whole.” (After the Times story was published, Slaughter tweeted that the article was “false,” though she later added, “facts are largely right, but quotes are taken way out of context and interpretation is wrong.”)

For years, Google has provided funding to New America as part of its philanthropic giving. According to the Times, more than $21 million came to New America from Google and from Eric Schmidt and his family foundation (Schmidt is the executive chairman of Google’s parent company, Alphabet). One would hope that Google would provide those funds in the best tradition of free thought: without ideological strings attached. But apparently the smallest dissent is too much to bear.

More here.

The Case Against Civilization

John Lanchester at The New Yorker:

It turns out that hunting and gathering is a good way to live. A study from 1966 found that it took a Ju/’hoansi only about seventeen hours a week, on average, to find an adequate supply of food; another nineteen hours were spent on domestic activities and chores. The average caloric intake of the hunter-gatherers was twenty-three hundred a day, close to the recommended amount. At the time these figures were first established, a comparable week in the United States involved forty hours of work and thirty-six of domestic labor. Ju/’hoansi do not accumulate surpluses; they get all the food they need, and then stop. They exhibit what Suzman calls “an unyielding confidence” that their environment will provide for their needs.

The web of food sources that the hunting-and-gathering Ju/’hoansi use is, exactly as Scott argues for Neolithic people, a complex one, with a wide range of animal protein, including porcupines, kudu, wildebeests, and elephants, and a hundred and twenty-five edible plant species, with different seasonal cycles, ecological niches, and responses to weather fluctuations. Hunter-gatherers need not only an unwritten almanac of dietary knowledge but what Scott calls a “library of almanacs.” As he suggests, the step-down in complexity between hunting and gathering and domesticated agriculture is as big as the step-down between domesticated agriculture and routine assembly work on a production line.

more here.

denis johnson, the poet

Jay Deshpande at Poetry Magazine:

A Denis Johnson sonnet is worth reading for sheer pyrotechnics: watching him build and then maneuver his way through rhyme schemes and a muscular, variable use of iambic pentameter is thrilling in its own right. What’s more, his sonnets spin extremities of human emotion into powerful, intensified moments. Like many of the great masters of the sonnet, he mustered rich rhetoric to make his point and turned the particulars of a romantic profession into a gutting, universal truth that far exceeds the bounds of the love poem. (Take the sonnet “Sway,” which presents the fundamental formula of love: “the story that begins / I did not know who she was / and ends I did not know who she was.”) His great innovation, though, was making the sonnet frame a moment of awe that surpasses understanding.

Consider Johnson’s poem “Heat.” This sonnet, published in The Incognito Lounge, wraps an erotic scene with a sudden lyric profession of wonder and fury. It calls upon Johnson’s surprising narrative economy, using only the briefest fragment to evoke an intensified post-coital tableau:

Here in the electric dusk your naked lover
tips the glass high and the ice cubes fall against her teeth.

more here.

Rope, Birdman, and the Economy of the Single-Shot Film

Anna Shechtman at nonsite:

As a number of scholars have argued, the final film product—a semblance of Brandon and Phillip’s crime in real-time—offers the viewer a sensation of “presence” conventionally impeded by montage in cinema. That such a sense of presence could emerge from within cinema at all would have shocked Georg Lukács, who, comparing the then-nascent cinematic medium to theater in 1913, wrote that “the stage is absolute presence, [but] the absence of ‘presence’ is the essential characteristic of the ‘cinema.’” He continued, “In a word: the basic law of connection for stage and drama is inexorable necessity; for the ‘cinema,’ its possibility is restricted by nothing … ‘Everything is possible’: that is the worldview of the ‘cinema.’” In Rope, however, Hitchcock imposes artificial limits on the cinema’s “endless possibilities.” He imposes the limits of theater. Rope’s camera never probes the perspective of the film’s protagonists (one never gets into Brandon or Phillip’s eyes); rather, Rope’s single, mobile camera assumes one unidentified point-of-view. It becomes an anonymous witness to Phillip and Brandon’s crime and the perverted dinner party that follows. In this sense, the film functions as a subjective long-take, but seemingly without a subject. Like theater-viewers, one sees only what this limited camera-witness sees, peering into the kitchen and overhearing a conversation in the next room. There is no reverse-shot that would attribute this 80-minute visual “quote” to a character because, as in theater, the viewer is it.

This mode of filmmaking—and the phenomenology of presence that it ostensibly compels for actors and viewers alike—anticipates Bazin’s remarks about the ontology of live television, which he wrote in 1954. For Bazin, the live-ness of live television (then the dominant mode of broadcast) offered the medium “a particular way of dealing with action: more freely than theater, but less varied than cinema.”

more here.

Rohingya and the Myth of Buddhist Tolerance

M. Reza Pirbhai in Counterpunch:

When old and young are systematically rounded up and shot. When women are gang raped and their babies thrown into waterways to drown. When their homes and businesses are burned. When all the atrocities of ethnic cleansing are plain to see, international law leaps into action. Global bodies and their constituent states work to simultaneously put an end to the atrocities, provide refuge for survivors and bring perpetrators to book, no matter the identity of the offender or the victim. Or so we are told. For as the on-going slaughter and displacement of Myanmar’s Rohingya Muslims reveals, international law is not so blind.

Since their citizenship rights were progressively revoked between the 1940s and ’80s, thousands of Rohingya men, women and children have been subjected to murder and rape, their villages have been razed to the ground and more than a million have fled to neighboring countries without much protest from the world beyond. Even the UN’s late attempts to investigate the most recent barbarities have fallen short of constituting a full Commission of Inquiry, and independent investigators have been blocked from entering Myanmar by the Buddhist-led government of Nobel Peace Prize winner Aung San Suu Kyi. “Just imagine, for a minute,” Columbia University’s Hamid Dabashi urges in a recent article, “if it were Jews or Christians, or else the ‘peaceful Buddhists,’ who were the subjects of Muslim persecutions.” Given the attention Muslim violence ceaselessly garners, the reason behind the apparent lack of outrage to protect the Rohingya is clear to him: “Something in the liberal fabric of Euro-American imagination is cancerously callous. It does not see Muslims as complete human beings.”

More here.

How We Cope with the End of Nature

Stephen Marche in Nautilus:

Solastalgia is the definitive disease of the 21st century, but only a few even know its name. The symptoms include an underlying sense of loss, a vague sensation of being torn from the earth, a general out-of-placeness, homelessness without leaving home. You have probably felt it without knowing what it was. Solastalgia is the unease we inflict on ourselves as we create a world we don’t want to inhabit, a world stripped of nature. Nature is retreating, and not gradually. According to the World Wildlife Fund, over half of the world’s wild vertebrate species have disappeared over the past forty years. More than 290 million acres of North American grasslands have been converted to agriculture. At current growth rates, US development will reduce the country’s forested regions by about 30 million acres by 2050. The amount of urban land in biodiversity hotspots is expected to triple between 2000 and 2030. Deeper catastrophes, we know, are lurking. The North Pole has hovered near the freezing point this past winter. The partial breakdown of the Paris Accords, and the sudden spike in the consequences of climate change—icebergs the size of Delaware cracking off Antarctica, the bleaching of the Great Barrier Reef—are forcing us to face a hard truth. The future we are building is one with much less exposure to nature and vastly diminished biodiversity.

And this creates a new problem for us as a species: The experience of nature is integral to our wellbeing and, by destroying the earth, we are making ourselves sick. In 2003, Glenn Albrecht, then an environmental philosopher at the School of Environmental and Life Sciences at the University of Newcastle in Australia, coined the term solastalgia. Much like nostalgia, solastalgia is difficult to define with precision, but is nonetheless instantly recognizable: “Solastalgia,” Albrecht wrote, “is the pain or sickness caused by the loss of, or inability to derive solace from, the present state of one’s home environment. Solastalgia exists when there is recognition that the beloved place in which one resides is under assault.” The type of assault may vary. The force of the assault may vary. The loss and unease that follows in the wake of the assault do not.

Glenn Albrecht chose “solasta” as a new root word for two reasons. “Solasta” contains the sense both of “solace” and “desolation.” Where nostalgia describes a longing for another place and another time, solastalgia is a longing for the now as it should be, for nature when there’s no nature there.

More here.

Tuesday, September 12, 2017

Jeff Koons: Or, Who’s Liberating Whom?

Morgan Meis in The Easel:

Koons has his defenders, but with works like Pink Panther and Michael Jackson and Bubbles, he has driven many in the art-critical establishment into what can only be called paroxysms of outrage. Jeff Koons’ recent retrospective at the Whitney Museum (2014) was another chance for the critics and academics to take their whacks. Jed Perl, in a piece for the New York Review of Books, summed up the feelings of many. Perl titled his piece, “The Cult of Jeff Koons.” Here’s the opening paragraph:

“Imagine the Jeff Koons retrospective at the Whitney Museum of American Art as the perfect storm. And at the center of the perfect storm there is a perfect vacuum. The storm is everything going on around Jeff Koons: the multimillion-dollar auction prices, the blue chip dealers, the hyperbolic claims of the critics, the adulation and the controversy and the public that quite naturally wants to know what all the fuss is about. The vacuum is the work itself, displayed on five of the six floors of the Whitney, a succession of pop culture trophies so emotionally dead that museumgoers appear a little dazed as they dutifully take out their iPhones and produce their selfies.”

If Perl is right (and he may well be), the only thing that is really interesting about Jeff Koons is the magnitude of the boondoggle. The question is how we square Perl’s contempt with, for instance, the following claim found on the Whitney Museum of American Art’s website: “Jeff Koons is widely regarded as one of the most important, influential, popular, and controversial artists of the postwar era.” There are, moreover, a number of important critics who have held this view, the best and most intelligent being the philosopher and art critic Arthur Danto.

More here.

Gottlob Frege: The machine in the ghost

Ray Monk in Prospect:

By common consent, the three founders of the modern analytic tradition of philosophy are, in chronological order, Gottlob Frege, Bertrand Russell and Ludwig Wittgenstein. The biggest project in my professional life has been to write biographies of the second and third of these men. But of the three, it is Frege who is—100 years on from his retirement—held in the greatest esteem by the philosophers of today.

His essay “On Sense and Reference” (1892) offered a philosophical account of linguistic meaning that broke new ground in sophistication and rigour, and it is still required reading for anyone who wants to understand contemporary philosophy of language. It is scarcely an exaggeration to say that he invented modern logic: he developed the basic ideas (if not the symbols now in use) of predicate logic, considered by most analytic philosophers to be an essential tool of their trade and a required part of almost every philosophy undergraduate degree programme. His book The Foundations of Arithmetic (1884) is still hailed as a paradigm of the kind of crisp, rigorous prose to which every analytic philosopher should aspire.

Frege’s insights have been influential outside philosophy, in areas including cognitive science, linguistics and computer science. Among the public, however, he is almost completely unknown, especially when put beside Wittgenstein and Russell. Most people have some idea who Russell was. Many have seen clips of his frequent appearances on television, can picture his bird-like features crowned with his mane of white hair, and recognise his unique voice, high-pitched, precise and aristocratic in an impossibly old-fashioned way (one imagines that no-one has spoken like that since the Regency). Even better known is Wittgenstein, the subject of a Derek Jarman movie and several poems, whose name is dropped by journalists, novelists and playwrights, confident that their audiences will have some idea who he is. But Frege? How many people know anything about him?

More here.