It’s Possible to Live in More than One Time, More than One History of the World


John Crowley in Lapham's Quarterly:

“Then what is time?” St. Augustine asked himself in his Confessions. “I know what it is if no one asks; but if anyone does, then I cannot explain it.”

Augustine saw the present as a vanishing knife edge between the past, which exists no longer, and the future, which doesn’t yet. All that exists is the present; but if the present is always present and never becomes the past, it’s not time, but eternity. Augustine’s view is what the metaphysicians call “presentism,” which holds that a comprehensive description of what exists (an ontology) can and should include only what exists right now. But among the things that do exist now are surely such things as the memory of former present moments and what existed in them, and the archives and old calendars that denote or describe them. Like the dropped mitten in the Ukrainian tale that is able to accommodate animals of all sizes seeking refuge in it from the cold, the ever-vanishing present is weirdly capacious—“There’s always room for one more!”

Time is continuous, but calendars are repetitive. They end by beginning again, adding units to ongoing time just by turning in place, like a stationary bicycle. Most calendars these days are largely empty, a frame for our personal events and commitments to be entered in; but historically calendars have existed in order to control time’s passage with recurring feasts, memorials, sacred duties, public duties, and sacred duties done publicly—what the church I grew up in calls holy days of obligation. Such a calendar can model in miniature the whole of time, its first day commemorating the first day of Creation, its red-letter days the great moments of world time coming up in the same order they occurred in history, the last date the last day, when all of time begins again. The recent fascination with the Mayan “long count” calendar reflects this: the world cycle was to end when the calendar did.

It’s possible to live in more than one time, more than one history of the world, without feeling a pressing need to reconcile them. Many people live in a sacred time—what the religious historian Mircea Eliade called “a primordial mythical time made present”—and a secular time, “secular” from the Latin saeculum, an age or a generation. Sacred time, “indefinitely recoverable, indefinitely repeatable,” according to Eliade, “neither changes nor is exhausted.” In secular time, on the other hand, each year, month, second, is a unique and unrepeatable unit that disappears even as it appears in the infinitesimal present.

More here.

Thursday Poem

An Arundel Tomb

Side by side, their faces blurred,
The earl and countess lie in stone,
Their proper habits vaguely shown
As jointed armour, stiffened pleat,
And that faint hint of the absurd –
The little dogs under their feet.

Such plainness of the pre-baroque
Hardly involves the eye, until
It meets his left-hand gauntlet, still
Clasped empty in the other; and
One sees, with a sharp tender shock,
His hand withdrawn, holding her hand.

They would not think to lie so long.
Such faithfulness in effigy
Was just a detail friends would see:
A sculptor's sweet commissioned grace
Thrown off in helping to prolong
The Latin names around the base.

They would not guess how early in
Their supine stationary voyage
The air would change to soundless damage,
Turn the old tenantry away;
How soon succeeding eyes begin
To look, not read. Rigidly they

Persisted, linked, through lengths and breadths
Of time. Snow fell, undated. Light
Each summer thronged the glass. A bright
Litter of bird-calls strewed the same
Bone-riddled ground. And up the paths
The endless altered people came,

Washing at their identity.
Now, helpless in the hollow of
An unarmorial age, a trough
Of smoke in slow suspended skeins
Above their scrap of history,
Only an attitude remains:

Time has transfigured them into
Untruth. The stone fidelity
They hardly meant has come to be
Their final blazon, and to prove
Our almost-instinct almost true:
What will survive of us is love.

by Philip Larkin
from The Whitsun Weddings, 1964


Do We Live in the Matrix?


Zeeya Merali in Discover:

In the 1999 sci-fi film classic The Matrix, the protagonist, Neo, is stunned to see people defying the laws of physics, running up walls and vanishing suddenly. These superhuman violations of the rules of the universe are possible because, unbeknownst to him, Neo’s consciousness is embedded in the Matrix, a virtual-reality simulation created by sentient machines.

The action really begins when Neo is given a fateful choice: Take the blue pill and return to his oblivious, virtual existence, or take the red pill to learn the truth about the Matrix and find out “how deep the rabbit hole goes.”

Physicists can now offer us the same choice, the ability to test whether we live in our own virtual Matrix, by studying radiation from space. As fanciful as it sounds, some philosophers have long argued that we’re actually more likely to be artificial intelligences trapped in a fake universe than we are organic minds in the “real” one.

But if that were true, the very laws of physics that allow us to devise such reality-checking technology may have little to do with the fundamental rules that govern the meta-universe inhabited by our simulators. To us, these programmers would be gods, able to twist reality on a whim.

So should we say yes to the offer to take the red pill and learn the truth — or are the implications too disturbing?

More here.

The Rise of Data and the Death of Politics


Evgeny Morozov in The Observer (Photograph: Mandel Ngan/AFP/Getty Images):

In the near future, Google will be the middleman standing between you and your fridge, you and your car, you and your rubbish bin, allowing the National Security Agency to satisfy its data addiction in bulk and via a single window.

This “smartification” of everyday life follows a familiar pattern: there's primary data – a list of what's in your smart fridge and your bin – and metadata – a log of how often you open either of these things or when they communicate with one another. Both produce interesting insights: cue smart mattresses – one recent model promises to track respiration and heart rates and how much you move during the night – and smart utensils that provide nutritional advice.

In addition to making our lives more efficient, this smart world also presents us with an exciting political choice. If so much of our everyday behaviour is already captured, analysed and nudged, why stick with unempirical approaches to regulation? Why rely on laws when one has sensors and feedback mechanisms? If policy interventions are to be – to use the buzzwords of the day – “evidence-based” and “results-oriented,” technology is here to help.

This new type of governance has a name: algorithmic regulation. Inasmuch as Silicon Valley has a political programme, this is it. Tim O'Reilly, an influential technology publisher, venture capitalist and ideas man (he is to blame for popularising the term “web 2.0”) has been its most enthusiastic promoter. In a recent essay that lays out his reasoning, O'Reilly makes an intriguing case for the virtues of algorithmic regulation – a case that deserves close scrutiny both for what it promises policymakers and for the simplistic assumptions it makes about politics, democracy and power.

More here.

Wednesday, October 22, 2014

Behind the Mask: The Life of Vita Sackville-West

Rachel Trethewey in The Independent:

In the famous image of Vita Sackville-West, Lady with a Red Hat, the writer is the embodiment of the confident young aristocrat. Exuding a languid elegance, her heavy-lidded Sackville eyes gaze out from beneath the broad brim. But this portrait captures another element of Vita’s persona. It was painted in 1918, shortly after her sexual awakening with Violet Keppel, and beneath the flamboyant clothes and bright lipstick there is an androgynous quality. In Behind the Mask, the first biography of Vita for 30 years, Matthew Dennison focuses on this ambiguity, exploring the duality which was rooted in her genetic inheritance and her eccentric upbringing.

Vita’s identity embraced masculine and feminine elements; her stiff-upper-lip English ancestry was in conflict with the Latin blood from her grandmother Pepita, a Spanish dancer who was the mistress of Lionel, Baron Sackville. Among their illegitimate offspring was Vita’s mother Victoria, who by marrying her cousin became the mistress of the Sackvilles’ ancestral home, Knole in Kent. The author of acclaimed biographies of Queen Victoria and her daughter Princess Beatrice, Dennison is particularly good at analysing complex mother-daughter relationships. Here, he sees Victoria’s identity interwoven with Vita’s. The former was a capricious character, he explains: “The fairy godmother was also a witch.” She claimed she could not bear to look at Vita because she was so ugly; the cruelty in Vita’s treatment of her lovers was learnt from her mother. An only child, Vita was often left at Knole with nannies and governesses while her parents travelled abroad. The house became like a person to her; built like a medieval village, it fired her imagination. Tragically for Vita, because she was a female she could not inherit the house. Dennison sees her fiction as addressing this; in her fantasy life, she celebrated a heroic male version of herself.

More here.

Are we free? Neuroscience gives the wrong answer

Daniel C. Dennett in Prospect:

For several millennia, people have worried about whether or not they have free will. What exactly worries them? No single answer suffices. For centuries the driving issue was about God’s supposed omniscience. If God knew what we were going to do before we did it, in what sense were we free to do otherwise? Weren’t we just acting out our parts in a Divine Script? Were any of our so-called decisions real decisions? Even before belief in an omniscient God began to wane, science took over the threatening role. Democritus, the ancient Greek philosopher and proto-scientist, postulated that the world, including us, was made of tiny entities—atoms—and his successor Epicurus imagined that unless atoms sometimes, unpredictably and for no reason, interrupted their trajectories with a random swerve, we would be trapped in causal chains that reached back for eternity, robbing us of our power to initiate actions on our own.

Lucretius adopted this idea, and expressed it with such dazzling power in his Epicurean masterpiece, De Rerum Natura, that ever since the rediscovery of that poem in the 15th century, it has structured the thinking of philosophers and scientists alike. This breathtaking anticipation of quantum mechanics and its sub-atomic particles jumping—independently of all prior causation—from one state to another, has been seen by many to clarify the problem and enunciate its solution in one fell swoop: to have free will is to be the beneficiary of “quantum indeterminism” somewhere deep in our brains. But others have seen that an agent with what amounts to an utterly unpredictable roulette wheel in the driver’s seat hardly qualifies as an agent who is responsible for the actions chosen. Does free will require indeterminism or not? Many philosophers are sure they know the answer (I among them), but it must be acknowledged that nothing approaching consensus has yet been reached.

More here.

This Gorgeous Sculpture Creates Instant Architecture in an Empty Room


Kristin Hohenadel in Slate:

Held annually since 2009 in Grand Rapids, Michigan, ArtPrize is a democratic art competition open to anyone in the world over age 18, with generous cash prizes awarded by both a jury of experts and popular vote. For the first time, a single work—Intersections by Pakistan-born Anila Quayyum Agha—took this year’s public and juried grand prizes for a total of $300,000.

Agha’s stunning piece is an obvious crowd-pleaser, a 6½-foot-square, laser-cut black lacquer wood cube suspended from the ceiling and lit with a single light bulb that casts breathtaking 32-foot-by-34-foot shadows to create instant architecture in an otherwise empty room.

The artist, who is now an associate professor of drawing at the Herron School of Art and Design in Indianapolis, explains on her website that the work is based on the geometrical patterns used in Islamic sacred spaces.

It was created to express what she describes as “the seminal experience of exclusion as a woman from a space of community and creativity such as a Mosque and translates the complex expressions of both wonder and exclusion that have been my experience while growing up in Pakistan.”

More here.

The White Racial Slur We’ve All Been Waiting For

Michael Mark Cohen in Medium:

I am a white, middle class male professor at a big, public university, and every year I get up in front of a hundred and fifty to two hundred undergraduates in a class on the history of race in America and I ask them to shout white racial slurs at me.

The results are usually disappointing.

First of all, everyone knows that saying anything overtly racist in front of strangers is totally taboo. So the inhibitions to participation in this insane activity are already pretty great. Even so, most of these kids are not new to conversations about race; the majority of them are students of color, including loads of junior college transfers, student parents, vets, and a smattering of white kids, mostly freshmen. Of course some are just scared of speaking in front of so many people, no matter what the topic.

So I cajole a few of them into “Cracker” and “Red Neck.” We can usually get to “Hillbilly” or “Trailer Trash” or “White Trash,” possibly even “Peckerwood,” before folks recognize the “Cletus the slack-jawed yokel” pattern of class discrimination here. And being that we are at a top ranked west coast university, not only do we all share basic middle class aspirations, but we can feel pretty safe in the fact that there are no “Red Necks” here to insult.

More here.

‘Hidden brain signatures’ of consciousness in vegetative state patients discovered

From KurzweilAI:

Scientists in Cambridge, England have found hidden signatures in the brains of people in a vegetative state that point to networks that could support consciousness — even when a patient appears to be unconscious and unresponsive. The study could help doctors identify patients who are aware despite being unable to communicate. Although unable to move and respond, some patients in a vegetative state are able to carry out tasks such as imagining playing a game of tennis, the scientists note. Using a functional magnetic resonance imaging (fMRI) scanner, researchers have previously been able to record activity in the pre-motor cortex, the part of the brain that deals with movement, in apparently unconscious patients asked to imagine playing tennis.

Now, a team of researchers led by scientists at the University of Cambridge and the MRC Cognition and Brain Sciences Unit, Cambridge, have used high-density electroencephalographs (EEG) and graph theory to study networks of activity in the brains of 32 patients diagnosed as vegetative and minimally conscious and compare them to healthy adults. The researchers showed that the connectome — the rich and diversely connected network that supports awareness in the healthy brain — is typically impaired in patients in a vegetative state. But they also found that some vegetative patients had well-preserved brain networks that look similar to those of healthy adults — these patients were those who had shown signs of hidden awareness by following commands such as imagining playing tennis.

More here.

Wednesday Poem

Bridge Builder

Bridge-builder I am
between the holy and the damned
between the bitter and the sweet
between chaff and the wheat

Bridge-builder I am
between the goat and the lamb
between the sermon and the sin
between the princess and Rumpelstiltskin

Bridge-builder I am
between the yoni and the lingam
between the darkness and the light
between the left hand and the right

Bridge-builder I am
between the storm and the calm
between the nightmare and the sleeper
between the cradle and the reaper

Bridge-builder I am
between the hex and the hexagram
between the chalice and the cauldron
between the gospel and the Gorgon

Bridge-builder I am
between the serpent and the wand
between the hunter and the hare
between the curse and the prayer

Bridge-builder I am
between the hanger and the hanged
between the water and the wine
between the pearls and the swine

Bridge-builder I am
between the beast and the human
for who can stop the dance
of eternal balance?

by John Agard
from Poetry Archive


Super-Intelligent Humans Are Coming


Stephen Hsu in Nautilus Magazine (Photo by Cinerama/Courtesy of Getty Images):

The possibility of super-intelligence follows directly from the genetic basis of intelligence. Characteristics like height and cognitive ability are controlled by thousands of genes, each of small effect. A rough lower bound on the number of common genetic variants affecting each trait can be deduced from the positive or negative effect on the trait (measured in inches of height or IQ points) of already discovered gene variants, called alleles.

The Social Science Genetic Association Consortium, an international collaboration involving dozens of university labs, has identified a handful of regions of human DNA that affect cognitive ability, showing that single-nucleotide polymorphisms in those regions are statistically correlated with intelligence, even after correction for multiple testing of 1 million independent DNA regions, in a sample of over 100,000 individuals.
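An aside on what that correction amounts to: testing a million regions at the usual 0.05 level would "discover" tens of thousands of hits from pure noise, so each individual test must clear a far stricter bar. A minimal sketch of the standard Bonferroni arithmetic — the figures below are the conventional genome-wide numbers, assumed here for illustration rather than taken from the consortium's study:

```python
# Bonferroni correction: to keep the chance of even one false positive
# across all tests near alpha, each single test must clear alpha / n_tests.
alpha = 0.05          # desired family-wise error rate
n_tests = 1_000_000   # independent DNA regions tested

per_test_threshold = alpha / n_tests
print(per_test_threshold)        # 5e-08, the conventional genome-wide significance level

# Without the correction, noise alone would still produce apparent hits:
expected_false_positives = alpha * n_tests
print(expected_false_positives)  # 50000.0
```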

If only a small number of genes controlled cognition, then each of the gene variants should have altered IQ by a large chunk—about 15 points of variation between two individuals. But the largest effect size researchers have been able to detect thus far is less than a single point of IQ. Larger effect sizes would have been much easier to detect, but have not been seen.

This means that there must be at least thousands of IQ alleles to account for the actual variation seen in the general population. A more sophisticated analysis (with large error bars) yields an estimate of perhaps 10,000 in total.

Each genetic variant slightly increases or decreases cognitive ability. Because it is determined by many small additive effects, cognitive ability is normally distributed, following the familiar bell-shaped curve, with more people in the middle than in the tails. A person with more than the average number of positive (IQ-increasing) variants will be above average in ability. The number of positive alleles above the population average required to raise the trait value by a standard deviation—that is, 15 points—is proportional to the square root of the number of variants, or about 100. In a nutshell, 100 or so additional positive variants could raise IQ by 15 points.
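The square-root scaling is easy to check with a toy simulation of this additive model. The sketch below is illustrative only: it assumes 10,000 causal variants, each with an IQ-raising allele at frequency 0.5 and equal effect, which makes one standard deviation of the allele count sqrt(10,000 × 0.25) = 50; the article's "about 100" reflects a different proportionality constant, but the scaling argument is the same.

```python
import numpy as np

rng = np.random.default_rng(0)

n_variants = 10_000  # the article's rough estimate of causal IQ variants
p = 0.5              # assumed frequency of each IQ-raising allele (illustration only)
n_people = 100_000

# Each person's count of positive alleles is a sum of n_variants independent
# coin flips, i.e. a binomial draw; the central limit theorem makes the
# resulting trait bell-shaped.
counts = rng.binomial(n_variants, p, size=n_people)
sd_counts = counts.std()
print(round(sd_counts, 1))    # ~50.0 = sqrt(N * p * (1 - p))

# Scale the per-allele effect so the population SD is 15 IQ points.
points_per_allele = 15 / sd_counts
iq = 100 + (counts - counts.mean()) * points_per_allele
print(round(iq.std(), 1))     # ~15.0

# One standard deviation (+15 points) therefore costs ~50 extra positive
# alleles under these assumptions; flipping all ~5,000 below-average alleles
# to positive would yield ~100 standard deviations, the article's closing figure.
print(round(sd_counts))
```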

Given that there are many thousands of potential positive variants, the implication is clear: If a human being could be engineered to have the positive version of each causal variant, they might exhibit cognitive ability which is roughly 100 standard deviations above average. This corresponds to more than 1,000 IQ points.

More here.

Fame and Literature, Irreconcilable Enemies


John Yargo in the LA Review of Books:

Bolaño’s biographers face a unique problem. The seductive popular image of him — something like a better-read Burroughs — is at odds with the voice of his fiction and his essays, which tends to be more generous, expansive, and penetrating than his image suggests. Even key events, like his arrest in Pinochet’s Chile or his “heroin addiction,” have been alternately credited as formative aspects of his personality, and discredited by his surviving family, friends, and rivals as erroneous planks of a legacy campaign.

What stands out in his fiction are the riotous voices, the contradictory and implausible characters, the restless equivocations and recapitulations: the polyphony. The first full-length biography in English, Bolaño: A Biography in Conversations, sidesteps “the authoritative biography” trap and attempts to recreate Bolaño-esque polyphony in telling the author’s own story. Its author, Mónica Maristain, conducted the last interviews as editor-in-chief of the Mexican edition of Playboy; they appear with other conversations published between 1999 and 2005 in a handy collection, Roberto Bolaño: The Last Interview. In those interviews, Bolaño clearly relishes talking about books and contradicting himself and his image. If the interviews are not confiding in the usual sense of personal disclosures, to his credit, he is far more intimate and vulnerable when answering a question about Cervantes than when other authors are sharing sensitive details about their families.

As in the essay collection Between Parentheses, the picture that emerges from the interviews and the biography is a Bolaño that draws from different sources than contemporary Anglo-American literary fiction incubated in the university workshop. In place of Hemingway, Borges and Nicanor Parra; Carver is substituted by Breton; Denis Johnson usurped by Jacques Vaché and Witold Gombrowicz.

In Latin American fiction, he had a similar effect, shifting the terms on which authors would be understood.

More here.

Category Mistakes


Richard Marshall interviews Ofra Magidor in 3:AM Magazine:

3:AM: You say it’s important for linguistics, computer science – how so?

OM: In the case of linguistics, it is fairly obvious why category mistakes are important: one of the central tasks of linguistics is explaining why some sentences are fine and others are infelicitous. In fact, category mistakes are a particularly interesting case, because a plausible argument can be made for explaining their oddness in terms of each of syntax, semantics, and pragmatics – so this is a good phenomenon to explore for anyone who is interested in the distinction between these three realms of language. This is probably why in the late 1960s category mistakes played a key role in one of the central disputes in the foundations of linguistics – that between interpretative semanticists (who claimed that syntax is autonomous of semantics) and generative semanticists (who rejected the sharp divide between these two realms).

I should also note there was a period in the 1960s when there was quite a lot of discussion of category mistakes happening in parallel in linguistics and in philosophy, but there was practically no interaction at all between the two fields on this topic (they even used different terms – in linguistics authors usually refer to category mistakes as ‘selectional violations’). One thing I tried to do in the book was to bring together these two parallel debates. I’d like to think that these days there is much more co-operation between linguists and philosophers of language so this kind of divide is less likely to happen.

Moving to computer science: one straightforward way in which category mistakes are relevant is through the field of computational linguistics. Suppose for example that you have an automatic translator which is given the sentence ‘John hit the ball’. If the translator looks up the word ‘ball’ in a dictionary, it will encounter (at least) two meanings: a spherical object that is used in games, and a formal gathering for dancing. It is obvious that the most natural interpretation of the sentence uses the former meaning, and one way to see that is to note that if ‘ball’ were interpreted in the ‘dance’ sense, the sentence would be a category mistake. So being able to recognize category mistakes can help the automatic translator reach the correct interpretation.
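The mechanism Magidor describes can be made concrete with a toy disambiguator. Everything in the sketch below is invented for illustration — the sense inventory, the category labels, and the verb restrictions — but it shows the idea: tag each sense with a semantic category, let each verb select a category for its object, and discard any sense that would make the sentence a category mistake.

```python
# Toy selectional-restriction disambiguator. The senses, categories, and
# verb restrictions are invented for illustration; real systems draw on
# much richer lexical resources.

SENSES = {
    "ball": [
        ("spherical object used in games", "PHYSICAL_OBJECT"),
        ("formal gathering for dancing", "EVENT"),
    ],
}

SELECTS = {
    "hit": "PHYSICAL_OBJECT",  # one hits things, not events
    "attend": "EVENT",         # one attends events, not objects
}

def disambiguate(verb: str, noun: str) -> list[str]:
    """Keep only the senses of `noun` that don't turn `verb` + `noun`
    into a category mistake (a 'selectional violation')."""
    wanted = SELECTS[verb]
    return [gloss for gloss, category in SENSES[noun] if category == wanted]

print(disambiguate("hit", "ball"))     # ['spherical object used in games']
print(disambiguate("attend", "ball"))  # ['formal gathering for dancing']
```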

But there is also a more general way in which the topic is relevant to computer science: computer programs use variables of various types which are assigned values – and it is very common to encounter cases where the value is of the wrong type for the variable. So there is an issue about how the program is going to deal with this kind of type mismatch which is in some ways parallel to the question of how natural languages deal with category mistakes.
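That parallel is easy to exhibit. A minimal sketch in Python with type hints: the mismatched call below is perfectly well-formed syntax, just as a category mistake is grammatical, but a static checker such as mypy rejects it before the program ever runs.

```python
# A type mismatch is the programmer's category mistake: the value is
# well-formed, just the wrong *kind* of thing for the variable.

def add_one(n: int) -> int:
    return n + 1

print(add_one(41))  # fine: 42

# mypy flags the call below without executing it:
#   error: Argument 1 to "add_one" has incompatible type "str"; expected "int"
try:
    add_one("green")  # type: ignore[arg-type]
except TypeError as err:
    # Python's runtime objects too, in its own vocabulary:
    print(err)        # can only concatenate str (not "int") to str
```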

More here.

Tuesday, October 21, 2014

Diary: ebola

Paul Farmer at The London Review of Books:

The worst is yet to come, especially when we take into account the social and economic impact of the epidemic, which has so far hit only a small number of patients (by contrast, the combined death toll of Aids, tuberculosis and malaria, the ‘big three’ infectious pathogens, was six million a year as recently as 2000). Trade and commerce in West Africa have already been gravely affected. And Ebola has reached the heart of the Liberian government, which is led by the first woman to win a presidential election in an African democracy. There were rumours that President Ellen Johnson Sirleaf was not attending the UN meeting because she was busy dealing with the crisis, or because she faced political instability at home. But we knew that one of her staff had fallen ill with Ebola. A few days ago, we heard that another of our Liberian hosts, a senior health official, had placed herself in 21-day quarantine. Although she is without symptoms, her chief aide died of Ebola on 25 September. Such developments, along with the rapid pace and often spectacular features of the illness, have led to a level of fear and stigma which seems even greater than that normally caused by pandemic disease.

But the fact is that weak health systems, not unprecedented virulence or a previously unknown mode of transmission, are to blame for Ebola’s rapid spread. Weak health systems are also to blame for the high case-fatality rates in the current pandemic, which is caused by the Zaire strain of the virus.

more here.

when H. G. Wells interviewed stalin

A 1935 piece by Malcolm Cowley at The New Republic:

I doubt that any other interview of the last ten years was more dramatic, more interesting as a clear statement of two positions or, in a sense, more absurdly grotesque than H.G. Wells’s interview with Stalin.

They met in Moscow on July 23 of last year and talked through an interpreter for nearly three hours. Wells gives a one-sided story in the last chapter of his “Experiment in Autobiography.” The official text of the interview can now be had in a pamphlet issued by International Publishers for two cents. A longer pamphlet, costing fifty cents in this country, was published in London by The New Statesman and Nation. It contains both the interview and an exchange of letters in which Bernard Shaw is keener and wittier than Wells or J.M. Keynes. There is, unfortunately, no letter from Stalin. We know what Wells thinks about him; it would be instructive to hear what Stalin thinks about Wells.

The drama of their meeting lay in the contrast between two systems of thought. Stalin, with full authority, was speaking for communism, for the living heritage of Marx and Engels and Lenin. Wells is not an official figure and was speaking for himself; but he spoke with the voice of Anglo-American liberalism.

more here.

Cubism at the Metropolitan Museum

Peter Schjeldahl at The New Yorker:

The show eases, somewhat, the famous difficulty of telling a Picasso from a Braque in the woodshedding period of 1909-12, which is termed Analytic Cubism. A wall text—a welcome one among far too many that are prolix, making for an installation that is like a walk-through textbook—points out Braque’s tendency toward ruddy luminosity and Picasso’s toward dramatic shadow. Still, the works speak a single visual language of clustered forms that advance and recede in bumps and hollows, with shaded planes, often bodiless contours, and stuttering fragments of representation. It’s said that they rendered objects from different viewpoints simultaneously, but seeing the works that way is beyond me. You don’t take in an Analytic Cubist picture as a whole. Rather, you survey it, as with an aerial view of some terrain that you must then explore on foot.

Oddly, for a style that crowds the picture plane, spatial illusion is crucial to Cubism. You know that you’re on the right track when, to your eye, the “little cube” elements start to pop in and out, as if in low relief. There’s a vicarious tactility to the experience. What the elements represent matters far less than where they are, relative to one another. To see how this works, it helps to take note of an endemic formal problem of Cubist painting: what to do in the corners, where the third dimension can’t be sustained.

more here.

a lucid, thrilling and amusing history of the digital age

Peter Conrad in The Guardian:

Revolutions usually leave ancient institutions tottering, societies shaken, the streets awash with blood. But what Walter Isaacson calls the “digital revolution” has kept its promise to liberate mankind. Enrichment for the few has been balanced by empowerment for the rest of us, and we can all – as the enraptured Isaacson says – enjoy a “sublime user experience” when we turn on our computers. Wikipedia gives us access to a global mind; on social media we can chat with friends we may never meet and who might not actually exist; blogs “democratise public discourse” by giving a voice to those who were once condemned to mute anonymity. Has heaven really come down to our wired-up, interconnected Earth?

What Isaacson sees as an eruption of communal creativity began with two boldly irreligious experiments: an attempt to manufacture life scientifically, followed by a scheme for a machine that could think. After Mary Shelley’s Frankenstein stitched together his monster, Byron’s bluestocking daughter Ada Lovelace showed how Charles Babbage’s “analytical engine” could numerically replicate the “changes of mutual relationship” that occurred in God’s creation. Unlike Shelley’s mad scientist, Lovelace stopped short of challenging the official creator: the apparatus had “no pretension to originate anything”. A century later, political necessity quashed this pious dread. The computing pioneers of the 1930s, as Isaacson points out, served military objectives. At MIT, Vannevar Bush’s differential analyser churned out artillery firing tables, and at Bletchley Park, after the war began, an all-electronic computer called the Colossus deciphered German codes. Later, the US air force and navy gobbled up all available microchips, which were used for guiding warheads aimed at targets in Russia or Cuba; only when the price of the chips dropped could they be used to power consumer products, not just weapons.

More here.

Genetic Variant May Shield Latinas From Breast Cancer

Anahad O'Connor in The New York Times:

A genetic variant that is particularly common in some Hispanic women with indigenous American ancestry appears to drastically lower the risk of breast cancer, a new study found. About one in five Latinas in the United States carry one copy of the variant, and roughly 1 percent carry two.

…Many genome-wide association studies have looked for associations with breast cancer in women of European descent. But this was the first such study to include large numbers of Latinas, who in this case hailed mostly from California, Colombia and Mexico, said the lead author of the study, Laura Fejerman of the Institute for Human Genetics in San Francisco. The researchers zeroed in on chromosome 6 and discovered the protective variant, which is known as a single nucleotide polymorphism, or SNP (pronounced “snip”). They also discovered that its frequency tracked with indigenous ancestry. It occurred with about 15 percent frequency in Mexico, 10 percent in Colombia and 5 percent in Puerto Rico. But its frequency was below 1 percent in whites and blacks, and other studies have shown that it occurs in about 2 percent of Chinese people. “My expectation would be that if you go to a highly indigenous region in Latin America, the frequency of the variant would be between 15 and 20 percent,” Dr. Fejerman said. “But in places with very low indigenous concentration — places with high European ancestry — you might not even see it.”

More here.

How Your Cat Is Making You Crazy

Jaroslav Flegr is no kook. And yet, for years, he suspected his mind had been taken over by parasites that had invaded his brain. So the prolific biologist took his science-fiction hunch into the lab. What he’s now discovering will startle you. Could tiny organisms carried by house cats be creeping into our brains, causing everything from car wrecks to schizophrenia?

Kathleen McAuliffe in The Atlantic:

Certainly Flegr’s thinking is jarringly unconventional. Starting in the early 1990s, he began to suspect that a single-celled parasite in the protozoan family was subtly manipulating his personality, causing him to behave in strange, often self-destructive ways. And if it was messing with his mind, he reasoned, it was probably doing the same to others.

The parasite, which is excreted by cats in their feces, is called Toxoplasma gondii (T. gondii or Toxo for short) and is the microbe that causes toxoplasmosis—the reason pregnant women are told to avoid cats’ litter boxes. Since the 1920s, doctors have recognized that a woman who becomes infected during pregnancy can transmit the disease to the fetus, in some cases resulting in severe brain damage or death. T. gondii is also a major threat to people with weakened immunity: in the early days of the AIDS epidemic, before good antiretroviral drugs were developed, it was to blame for the dementia that afflicted many patients at the disease’s end stage. Healthy children and adults, however, usually experience nothing worse than brief flu-like symptoms before quickly fighting off the protozoan, which thereafter lies dormant inside brain cells—or at least that’s the standard medical wisdom.

But if Flegr is right, the “latent” parasite may be quietly tweaking the connections between our neurons, changing our response to frightening situations, our trust in others, how outgoing we are, and even our preference for certain scents. And that’s not all. He also believes that the organism contributes to car crashes, suicides, and mental disorders such as schizophrenia. When you add up all the different ways it can harm us, says Flegr, “Toxoplasma might even kill as many people as malaria, or at least a million people a year.”

More here.