The discovery of independent life beyond Earth would have deep philosophical implications for us

Tim Mulgan in Aeon:

Suppose we woke up tomorrow to learn that extraterrestrial life had been discovered. What difference would that make? Set aside the extreme scenarios of popular fiction. The truth will probably be more mundane – not massive spaceships suddenly filling the sky but, instead, microorganisms found deep inside an ice-covered moon, a non-random radio signal from a distant star system, or the ruins of a long-dead alien civilisation. What difference might those discoveries make? Would they strengthen or weaken our faith in God, or science, or humanity? Would they force us to re-evaluate the importance of our own lives, values and projects?

In academic philosophy today, an interest in extraterrestrial life is regarded with some suspicion. This is a historical anomaly. In Ancient Greece, Epicureans argued that every possible form of life must recur infinitely many times in an infinite universe. In the 17th, 18th and 19th centuries, as modern astronomy demonstrated that our Earth is just another planet and our Sun just another star, the default hypothesis among informed observers was that the Universe is filled with habitable planets and intelligent life. One principal argument for this ‘pluralism’ was philosophical or theological: God (or Nature) does nothing in vain, and therefore such a vast cosmos could not be home to only one small race of rational beings.

My goal here is to explore some unexpected implications of the discovery of extraterrestrial life, and my conclusions are very speculative: extraterrestrial life would lend non-decisive support to several interesting and controversial philosophical positions.

More here.

The Tragedy of Liberalism

Patrick J. Deneen in The Hedgehog Review:

America is a nation in deep agreement and common belief. The proof lies, somewhat paradoxically, in the often tempestuous and increasingly acrimonious debate between the two main US political parties. The widening divide represented by this debate has, for many of us, defined the scope of our political views and the resultant differences for at least the past one hundred years. But even as we do tense and bruising battle, a deeper form of philosophical agreement reigns. As described by Louis Hartz in his 1955 book The Liberal Tradition in America, the nature of our debates themselves is defined within the framework of liberalism. That framework has seemingly expanded, but it is nonetheless bounded, in as much as the political debates of our time have pitted one variant of liberalism against another, which were given the labels “conservatism” and “liberalism” but which are better categorized as “classical liberalism” and “progressive liberalism.” While we have focused our attention on the growing differences between “classical” and “progressive,” we have been largely inattentive to the unifying nature of their shared liberalism.

While classical liberalism looks back to a liberalism achieved and lost—particularly the founding philosophy of America that stressed natural rights, limited government, and a relatively free and open market—“progressive” liberalism longs for a liberalism not yet achieved, one that strives to transcend the limitations of the past and even envisions a transformed humanity, its consciousness enlarged, practicing what Edward Bellamy called “the religion of solidarity.” As Richard Rorty envisioned in his aptly titled 1998 book Achieving Our Country, liberal democracy “is the principled means by which a more evolved form of humanity will come into existence.… Democratic humanity…has ‘more being’ than predemocratic humanity. The citizens of a [liberal] democratic, Whitmanesque society are able to create new, hitherto unimagined roles and goals for themselves.”

More here.

The Heroes of CRISPR

Eric S. Lander in Cell:

Three years ago, scientists reported that CRISPR technology can enable precise and efficient genome editing in living eukaryotic cells. Since then, the method has taken the scientific community by storm, with thousands of labs using it for applications from biomedicine to agriculture. Yet, the preceding 20-year journey—the discovery of a strange microbial repeat sequence; its recognition as an adaptive immune system; its biological characterization; and its repurposing for genome engineering—remains little known. This Perspective aims to fill in this backstory—the history of ideas and the stories of pioneers—and draw lessons about the remarkable ecosystem underlying scientific discovery.

It’s hard to recall a revolution that has swept biology more swiftly than CRISPR. Just 3 years ago, scientists reported that the CRISPR system—an adaptive immune system used by microbes to defend themselves against invading viruses by recording and targeting their DNA sequences—could be repurposed into a simple and reliable technique for editing, in living cells, the genomes of mammals and other organisms. CRISPR was soon adapted for a vast range of applications—creating complex animal models of human-inherited diseases and cancers; performing genome-wide screens in human cells to pinpoint the genes underlying biological processes; turning specific genes on or off; and genetically modifying plants—and is being used in thousands of labs worldwide. The prospect that CRISPR might be used to modify the human germline has stimulated international debate.

If there are molecular biologists left who have not heard of CRISPR, I have not met them. Yet, if you ask scientists how this revolution came to pass, they often have no idea. The immunologist Sir Peter Medawar observed, “The history of science bores most scientists stiff” (Medawar, 1968). Indeed, scientists focus relentlessly on the future. Once a fact is firmly established, the circuitous path that led to its discovery is seen as a distraction.

Yet, the human stories behind scientific advances can teach us a lot about the miraculous ecosystem that drives biomedical progress—about the roles of serendipity and planning, of pure curiosity and practical application, of hypothesis-free and hypothesis-driven science, of individuals and teams, and of fresh perspectives and deep expertise.

More here.

Are you judging your decisions on their outcomes? The Resulting Fallacy Is Ruining Your Decisions

Stuart Firestein in Nautilus:

Most poker players didn’t go to graduate school for cognitive linguistics. Then again, most poker players aren’t Annie Duke.

After pursuing a psychology Ph.D. on childhood language acquisition, Duke turned her skills to the poker table, where she has taken home over $4 million in lifetime earnings. For a time she was the leading female money winner in World Series of Poker history, and remains in the top five. She’s written two books on poker strategy, and next year will release a book called Thinking in Bets: Making Smarter Decisions When You Don’t Have All the Facts.

In it, Duke parlays her experience with cards into general lessons about decision making that are relevant for all of us. If a well-reasoned decision leads to a negative outcome, was it the wrong decision? How do we distinguish between luck and skill? And how do we move beyond our cognitive biases?

Stuart Firestein, a professor of neuroscience at Columbia University, sat down with Duke in October to talk to her about life and poker.

How did you get into science?

From when I was very young I set out on an academic path. My parents were both teachers. My dad taught at a small private school in New England. My mother taught at the local public school until she had babies. (It was the ’60s, and that was the usual path for women then.) I grew up on the campus of the school and then went to Columbia. When I entered Columbia I thought I would follow in my father’s footsteps and major in English and go on to graduate school. In my family, it was really this idea of, “Where are you going to go to graduate school?” not “if.” I ended up double majoring in English and psychology. The whole time I was at Columbia, I worked in Barbara Landau’s lab as a research assistant. She was looking at first language acquisition, which was a topic I fell in love with—it’s what I ended up actually studying when I went to graduate school.

More here.

Ben Shapiro’s fans apparently think he is very smart; it is not clear why

Nathan J. Robinson in Current Affairs:

It’s easy to laugh, as some of us do, at the phrase “conservative intellectual.” When the most prominent public spokesmen for the right’s ideas include Milo Yiannopoulos, Charles Murray, and Dinesh D’Souza, one might conclude that the movement does not have anything serious to offer beyond “Feminism is cancer,” “Black people are dumb,” and “Democrats are Nazis.” (Those are, as I understand it, the central intellectual contributions of Yiannopoulos, Murray, and D’Souza, respectively.)

But according to the New York Times, it would be a mistake to write off Conservative Thought so hastily. For we would be overlooking one crucial figure: Ben Shapiro. Shapiro, we are told, is “the cool kid’s philosopher, dissecting arguments with a lawyer’s skill and references to Aristotle.” The Times quotes praise of Shapiro as a “brilliant polemicist” and “principled gladiator,” a quick-witted man who “reads books,” and “takes apart arguments in ways that make the conservative conclusion seem utterly logical.” Shapiro is the “destroyer of weak arguments”; he “has been called the voice of the conservative millennial movement.” He is a genuine intellectual, a man who “does not attack unfairly, stoke anger for the sake of it, or mischaracterize his opponents’ positions.” He is principled: he deplores Trump, and cares about Truth. Shapiro’s personal mantra, “Facts don’t care about your feelings,” captures his approach: he’s passionate, but he believes in following reason rather than emotion. Shapiro, then, represents the best in contemporary conservative thinking. And if the cool kids have a philosopher, it is worth examining his philosophy in some depth.

I will confess, I had not spent much time listening to or reading Ben Shapiro before reading about him in the New York Times. That might be a damning sign of my own closed-mindedness: here I am, a person who considers himself intellectually serious, and I have written off the other side without even engaging with its strongest arguments. So I decided to spend a few wearying days trawling through the Shapiro oeuvre, listening to the speeches and radio shows and reading the columns and books. If Shapiro had arguments that Destroyed and Decimated the left, I wanted to make sure I heard them. I consider myself a bit of a leftist, and I like to know when I’ve been decimated.

More here.

Friday, December 8, 2017

Premodernism of the Future

Patrick Lee Miller in Quillette:

Modernism and Postmodernism are at an impasse. This was the conclusion of yesterday’s essay. Without its argument, though, you are unlikely to agree. Most people aware of this debate—whether in the hallways of academia, the online magazines, or the corridors of power—are partisans of one side or the other. For them, there is no impasse, only a conflict between the reasonable and the foolish, the duped and the woke. Most readers of this site favor modernism, and there are many reasons to do so. Yesterday’s essay catalogued the main ones, especially universal rights and empirical science. But it also presented some scientific reasoning about reason, showing the limits of the modernist approach, including science itself.

Yesterday’s essay began with Michael Aaron’s division of our culture wars into three camps: postmodernists, modernists, and traditionalists. After quickly knocking down a straw-man of traditionalism, Aaron reproduced the critiques of postmodern political excesses that are familiar to every reader of this site. Modernism was the winner by default. What he failed to consider, and in this failure he is not alone, are two points that need to be absorbed by champions of universal rights and empirical science. First, while postmodernism fails as a positive politics, it is still powerful as a critique of the blindspots of modernism. That was part of yesterday’s argument. And second, that there is more wisdom in “premodernism,” especially the philosophies of Greek antiquity, than is dreamt of in most accounts of our present crisis. This is the argument of today’s essay.

More here.

The Neuroscience of Changing Your Mind

Bret Stetka in Scientific American:

Every day our brains grapple with various last-minute decisions. We adjust our gait to avoid a patch of ice; we exit to hit the rest stop; we switch to our backhand before thwacking a tennis ball.

Scientists have long accepted that our ability to abruptly stop or modify a planned behavior is controlled via a single region within the brain’s prefrontal cortex, an area involved in planning and other higher mental functions. By studying other parts of the brain in both humans and monkeys, however, a team from Johns Hopkins University has now concluded that last-minute decision-making is a lot more complicated than previously known, involving complex neural coordination among multiple brain areas. The revelations may help scientists unravel certain aspects of addictive behaviors and understand why accidents like falls grow increasingly common as we age, according to the Johns Hopkins team.

The findings, published Thursday in Neuron, reveal that reneging on an intended behavior involves coordinated cross talk between several brain regions. As a result, changing our minds even mere milliseconds after making a decision is often too late to alter a movement or behavior. Using functional magnetic resonance imaging—a technique that monitors brain activity in real time—the Johns Hopkins group found that reversing a decision requires ultrafast communication between two specific zones within the prefrontal cortex and another nearby structure called the frontal eye field, an area involved in controlling eye movements and visual awareness.

More here.

The Truth about ‘Cultural Appropriation’

Kenan Malik in Art Review:

Maqbool Fida Husain is perhaps India’s greatest artist of the twentieth century. His work linked ancient and modern traditions and helped transform Indian modernism. But not everyone appreciated Husain’s work. His depictions of Hindu deities, often naked, outraged Hindu nationalists who questioned his right, as someone of Muslim background, to depict figures sacred to Hindus, accusing him of ‘hurting religious feelings’. His home and gallery were ransacked, many of his paintings destroyed. He faced lawsuits, including ones for ‘promoting enmity between different groups’. The harassment spread beyond India’s borders. In 2006, London’s Asia House Gallery shut an exhibition of his work after protests and the defacement of two paintings. Husain, who died in 2011, was forced to live his last years in exile, in London and Qatar.

Were he still alive today, M.F. Husain’s Hindu critics might well be accusing him not of sacrilege but of ‘cultural appropriation’ – the ‘theft’ of images and ideas that truly belong to another culture and that he had no right to take without permission.

The idea of cultural appropriation has, in recent years, moved from being an abstruse academic and legal concept to a mainstream political issue. From Beyoncé’s Bollywood outfits to Dana Schutz’s painting of Emmett Till, and from the recent controversy surrounding Sam Durant’s sculpture Scaffold (2012) to Omer Fast’s recreation of an old Chinatown storefront at James Cohan Gallery, New York, there is barely a week in which controversies over cultural appropriation are not in the headlines.

So, what is cultural appropriation and why has it become such a contentious issue?

More here.

on ‘Facing Gaia: Eight Lectures on the New Climate Regime’

John Tresch at Public Books:

In Facing Gaia: Eight Lectures on the New Climate Regime, Bruno Latour aims to reintroduce us to our own planet. The Earth emerges as a bizarre and unfamiliar presence, dimly glimpsed but exerting a colossal and uncertain pressure on all our actions. Though its unpredictable effects promise no meaning or redemption, this alien power forces our attention to the immediacy of terrestrial life.

Latour’s work has set the pace for science and technology studies since his ethnographies of laboratories in the 1980s and 90s; since We Have Never Been Modern, he has upended received wisdom about the bond between science and progress, challenged academic habits of critique, and inspired radical approaches to objects and ontologies across the social sciences and humanities. The concern for ecology that runs throughout these works takes center stage in these much-awaited lectures, pushed forward by what Isabelle Stengers calls the “intrusion of Gaia”—the catastrophic fits of an Earth whose tolerance has been exceeded.

Human-caused climate change reawakens an apocalyptic sensibility, altering everything we do, think, and feel, whether we acknowledge it or not. “We have become the people who could have acted thirty or forty years ago – and did nothing, or far too little.”

more here.

italy and immigration

Aaron Robertson at The Point:

When it was discovered last fall that one of Rome’s beloved sculptures, Gian Lorenzo Bernini’s Elephant and Obelisk, had been vandalized, the tip of the marble elephant’s left tusk snapped off, Igiaba Scego used the opportunity to diagnose what she understood as a peculiarly Roman sickness. Scego, writing for Internazionale, called Rome a “lonely and indolent” city where the stench of uncollected trash chokes every breath and aggression is diffuse. Perhaps these would be pardonable sins were the city more hospitable to the “other”—even a symbol like the Indian elephant—but something like the opposite seems to be the case.

This has been a year of great exposure for Scego, the Roman-born daughter of Somali immigrants who left their home after the 1969 coup d’état of Siad Barre, a former auxiliary soldier for the British and Italian empires. The English translation of her fourth novel, Adua, was released by New Vessel in June. She also published Lend Me Your Wings (Prestami le ali), an illustrated children’s book set in eighteenth-century Europe about a Jewish girl from the ghettos of Venice and a young African slave boy who help liberate a rhino from its cruel Dutch owner. A blending of the fabular and historical, filtered through the eyes of society’s castaways, is the trademark that has made Scego something like Italy’s most obvious answer to Toni Morrison.

more here.

Toulouse-Lautrec and Picasso

Adrian Tahourdin at the TLS:

Toulouse-Lautrec and Picasso never met. Picasso first went to Paris from Malaga via Barcelona with his friend Carles Casagemas, in October 1900. He visited the Exposition Universelle and saw his own work “Last Moments” (1899), a painting inspired by the death of his sister Conchita in 1895, exhibited in the Grand Palais. Casagemas would commit suicide in 1901, over a broken love affair – Picasso’s Blue period portrait of him (1901) clearly shows the bullet wound in his temple.

By the time the Spaniard arrived in Paris, Lautrec was already gravely ill, and had left the French capital (he died in September 1901 at the age of thirty-six at the family chateau of Malromé in the Gironde). Born in Albi into an aristocratic and slightly inbred family (his mother Adèle Tapié de Céleyran and father Comte Alphonse de Toulouse-Lautrec were first cousins), Henri was known at school as “le petit bonhomme” (“the little fellow”). His father was interested in horses and tried to encourage his son in his passion, without success. According to Henri Perruchot’s slightly novelistic (he uses dialogue) but very readable Vie de Toulouse-Lautrec, Comte Alphonse later developed a phobia of bridges and to avoid them would swim across rivers or, if the water was too cold, would walk over stepping stones. His son Henri stopped growing early on (he was said to be 1.52 metres tall) and, partly as a consequence, at the age of thirteen broke his left femur in a fall; fifteen months later he broke his right femur. His forearms were foreshortened, while his fingers were enormous, as were his genitals.

more here.

What Lenin’s Critics Got Right

Mitchell Cohen in Dissent:

This year is the centenary of Russia’s revolutions, the one that overthrew Tsarism and the one that put the Bolsheviks in power. Next year will be the bicentenary of Marx’s birth. It’s a time when not thinking about the left’s history is impossible. These anniversaries arrive when there are positive rumblings on the left and very dangerous dins on the right. That makes it urgent that those who call ourselves “left”—an expansive term that, for me, signifies an amalgam of democratic, liberal, and egalitarian values—recollect that people who deployed language we still use have, at too many times, caused unmitigated disaster.

The Bolshevik takeover in Russia is a prime example. A number of myths derived from Bolshevism still lurk within parts of the left: “there really was no alternative to Leninism”; “if only Lenin had lived longer”; “if only Trotsky had won out”; “if only Bukharin . . . ” And, most important: “it is acceptable to suffocate democracy for the sake of socioeconomic equality.” I want to generate a little discomfort on the left but also some on the right by retrieving an airbrushed left. Airbrushing is usually associated with Stalinism and its attempts to eliminate its foes, both physically and from photos. My concern will be critics of Leninism together with Bolshevism’s mindset and its consequences for the left. One historian, Orlando Figes, notes that “tens of thousands were killed by the bombs and bullets of the revolutionaries, and at least an equal number by the repressions of the tsarist regime, before 1917 . . . ” Hundreds of thousands died in the “Red Terror,” he continues, with similar numbers perishing in the “White Terror” (factoring in anti-Jewish pogroms). In fact, the Bolshevik record between October 1917 and Lenin’s death in early 1924 would have satisfied any right-wing regime: virtually all left-wing parties and movements were crushed. That was before Stalin. Though later in the century, there were calls for “no enemies to the left,” Bolsheviks had not always seen things that way. Real alliances were a problem for them since alliances entail compromises.

No regime identifying with Bolshevism has led, at any time or place, to anything that can be called “liberation.”

More here.

Why Stem Cells Are Unfair to Their Children

Dan Garisto in Nautilus:

Darcie Moore is an expert on cargo. Not the kind you’d find on a freight train, though—it’s the cargo you’d find in stem cells, the kind that can transform into the different types of cells your body needs in your brain, skin, hair follicles, and lots of other places. They’re especially critical when we’re developing at an early age, but adults have them, too. As stem cells grow and replicate, they can accumulate misfolded proteins and other gunk—“cargo”—that harms their function by, in Moore’s words, “exhausting” them. Nautilus caught up with Moore, a neuroscientist at the University of Wisconsin-Madison, to talk about how she investigates stem cell exhaustion and how it contributes to the puzzle of aging.

What do stem cells have to do with aging?

The theory is that during aging, your stem cells begin to dysfunction. For example, you begin to lose the ability to make pigment in your hair with aging due to melanocyte stem cells becoming depleted. Some of the concepts that apply to stem cell aging have been initially studied quite a bit in yeast, including the work my lab does, and it’s only recently that this work has started to take off in mammalian systems.

Why study yeast aging?

Every time these unicellular organisms divide, they’re aging based on the number of cell divisions, and not necessarily on chronological age. Scientists researching yeast have found a barrier that limits the movement of proteins between the mother and their bud. We thought this might be really interesting to explore in neural stem cells of the brain, to see if replicative aging occurred there.

More here.

Thursday, December 7, 2017

The Impasse Between Modernism and Postmodernism

Patrick Lee Miller in Quillette:

Buying textbooks, writing syllabi, and putting on armor. This is how many students and teachers prepared to return to campus this past fall. The last few years have witnessed an intensifying war for the soul of the university, with many minor skirmishes, and several pitched battles. The most dramatic was last spring at Evergreen State, shortly before the end of the spring semester. Perhaps the most dramatic since then have been at Reed College and Wilfrid Laurier University. There is no shortage of examples, filling periodicals left and right. Wherever it next explodes, this war promises more ferocity, causing more casualties—careers, programs, ideals.

What’s at stake? According to Michael Aaron, writing after the battle at Evergreen, the campus war is symptomatic of a broader clash of three worldviews contesting the future of our culture: traditionalism, modernism, and postmodernism. The traditionalists, he writes, “do not like the direction in which modernity is headed, and so are looking to go back to an earlier time when they believe society was better.” Whether they oppose changes to sexual mores or American demographics, Aaron adds, “these folks include typical status-quo conservatives, Evangelical Christians as well as more nefarious types such as white nationalists and the ‘alt right’.” In his estimation, they are done.

He concedes that the election of Trump has empowered them, but he believes “they have largely been pushed to the fringes in terms of their social influence.” A few hours in front of Fox News, or browsing the massive comment threads of some PragerU videos, would disabuse him of this illusion. Traditionalists are very influential in the national culture of the U.S.A., if not other countries, and hopeful predictions of their retreat have all proven false. But Aaron is correct, in a way.

More here.

The Complicated Legacy Of A Panda Who Was Really Good At Sex

Maggie Koerth-Baker in FiveThirtyEight:

When he died from cancer on Dec. 28, 2016, the 31-year-old Pan Pan was the world’s panda paterfamilias: the oldest known living male and the panda (male or female) with the most genetic contribution to the species’ captive population. Today, there are 520 pandas living in research centers and zoos, mostly in China. Chinese officials say more than 130 of them are descendants of Pan Pan.

Pan Pan saved his species by being really, really, ridiculously good at sex. Before Pan Pan, experts thought that building up a stable population of captive pandas was going to require extensive use of artificial insemination. Pan Pan not only led the way on reproducing in captivity, he taught us that pandas were perfectly capable of doing it for themselves — and they’re now increasingly allowed to do so. Scientists say giant pandas represent, hands down, the most successful captive animal breeding program humans have ever embarked on, and, partly, we have Pan Pan to thank. He was a big, fluffy stud muffin, and he was beloved. “It sounds kind of weird,” Wille said of their first meeting in 2012. “Most people want to meet rock stars or movie stars. I wanted to meet Pan Pan. He was a legend.”

From the edge of extinction, Pan Pan (and pandas) emerged triumphant. And their success is also ours — proof that maybe humans really can clean up the ecological messes that we make. In September 2016, the International Union for Conservation of Nature declared pandas to no longer be endangered.

More here.

How Not to Fix the U.N. Human Rights Council

Ken Roth in Foreign Policy:

The Trump administration wants to reform the U.N. Human Rights Council. But it cites two concerns that would require conflicting strategies to address. It needs to sort out its priorities if it expects to make any progress.

On one hand, it wants to improve the council’s membership to strengthen its willingness to address the world’s most serious abuses. On the other hand, it wants to abolish the council’s longstanding special agenda item on the Israeli-occupied Palestinian territories. If the administration insists on elevating its defense of Israel above all else, it risks undermining an essential institution for the global defense of human rights — instead of strengthening it.

The 47-member Human Rights Council, based in Geneva, is the U.N.’s leading human rights body. As with any intergovernmental institution, its effectiveness depends on rallying the votes of its members. Sometimes it falls short: It has neglected, for instance, Egypt’s draconian four-year crackdown to crush all dissent and Venezuela’s decimation of its once-vibrant democracy. But often it succeeds. The most recent session of the council, in September, adopted important resolutions on, among other subjects, Myanmar’s ethnic cleansing of its Rohingya population, Syria’s targeting of hospitals and other civilian institutions, the Saudi-led coalition’s bombing and starving of Yemeni civilians, and South Sudanese fighters’ slaughter of civilians because of their ethnicity.

More here.

the transformation of work

Rhian E. Jones at Eurozine:

As apps and automation reconfigure work, what happens to how we think about ‘the working class’? The communities of manual, domestic and clerical workers called into being by industrial capitalism, who formed a majority of British society throughout the 20th century, now largely languish in areas defined by their lack of work, reliance on benefits, and subsequent demonisation in culture and politics. BBC research published in April 2013 as the Great British Class Survey, which divided UK society into seven layers, seemed to suggest that class was becoming defined on a cultural and social rather than occupational basis, in line with the nuanced framework proposed by Pierre Bourdieu in 1979. But while social and cultural expression and engagement may be less reliable as class markers, economic relationships remain fundamental in how people see themselves and others. Selina Todd’s The People: The Rise and Fall of the Working Class 1910-2010 (John Murray, 2014) concludes by emphasising that a majority of British people still identify as working class, and around half of respondents to the Great British Class Survey were characterised by their low levels of economic capital. On the lowest rung of the Great British Class Survey’s taxonomy was the ‘precariat’, defined by its occupational and economic insecurity. This group was previously the subject of Guy Standing’s 2011 polemic The Precariat (Bloomsbury), in which he dubbed it a ‘new dangerous class’, whose plight could generate anger, violence and susceptibility to fascism unless addressed by social reforms geared towards financial security.

more here.