The Gourmet Guerilla: Adventures in Stealth Dining

Tiffany Ng in Pluck Magazine:

What is it?

Half the time, my guests think I run ‘gorilla dining’ events and laugh at the ludicrousness of such a thought. Most times, I don’t have the heart to correct them more than once, lest the struggle to grasp this concept deter them from attending future events or, worse, from spreading the word. It did not make things any easier that I decided to introduce guerilla dining to Copenhagen and conjure up a market out of thin air. Looking back on it, how naïve I was. Let’s just chalk that up to ingenuity, for now. Thanks to this leap of faith, I’ve had a stubborn, “I think I can, I think I can” choo-choo train approach this past year that’s led to countless adventures.

The concept of guerilla dining has no official definition. Each orchestrator has his or her own understanding of what constitutes a true event. Many do not even use the term guerilla dining, opting instead for pop-up restaurant or supper club. In my view, the aim of each project is the same: we are looking to provide our guests with a completely different dining experience – one that will imprint itself permanently in their memory. How it is executed and what elements are brought in to create the experience is what sets each apart. The range runs from supper clubs of only a few people and the host in a private home serving homemade food, to professional exhibitions of upwards of fifty guests with hired chefs, musicians, and mobile kitchens. I like to think of mine as art installations that revolve around the central theme of food. Some have been intimate gatherings of fifteen guests, and others well over a hundred.

So how did I – a native San Franciscan transplanted in Copenhagen of all places — get involved in this sub-culture, you ask?

Jonathan Lethem Interviews Geoff Dyer

In Bomb Magazine:

J[onathan ]L[ethem] In the matter of coming up with a form or structure, I’m eager to talk with you about your new book on Tarkovsky’s Stalker. I’m actually reading it in tandem with revisiting the film—the first time I’ve seen it in 20 years. I’m halfway through both book and film as I write this. The short book that covers another artifact—a book, a film, an album—in scrupulous close description (with plenty of digressions, of course)—is something I’m trying myself. Last year with a film, John Carpenter’s They Live, and right at the moment I’m writing a short book on a Talking Heads album, Fear of Music. (I flatter myself I’m in “Dyerian Mode” when I do this.) If a novel is a mirror walking along a road (somebody said this; in a spirit of Dyerian laziness I’m refusing to Google it), a book like this is a mirror walking hand-in-hand with somebody else’s mirror. I’ll admit I also became fascinated by a weird concurrence in our film-subjects: both Stalker and They Live are films that switch between color and black-and-white (and therefore both get compared to The Wizard of Oz), and both turn on a transformation of the everyday world at roughly the half-hour mark, where the ordinary is revealed as extraordinary. Of course, your film quite respectably avoids wrestlers and ghouls.

G[eoff ]D[yer] Here is an important difference between us. You could do these books as sidelines or diversion, almost, I imagine, writing fiction in the morning and then doing the film or Talking Heads stuff in the afternoon. I operate at a far lower level of energy and inspiration, but a higher pitch of desperation! Generally, I like the idea of short books on one particular cultural artifact as long as they don’t conform to some kind of series idea or editorial template. The madder the better, in my view. I like the idea of an absurdly long book on one small thing. I think we’d agree that the choice of artifact is sort of irrelevant in terms of its cultural standing: all that matters is what it means to you, the author. I had so much fun doing the Stalker book I am tempted to do another, this time on Where Eagles Dare. In fact, I find myself thinking/whining, Why shouldn’t I do that? Plenty of other writers keep banging out versions of the same thing, book after book, why should I always have to be doing something completely new each time?

Saturday, October 22, 2011

When Kerouac Met Kesey

Sterling Lord in The American Scholar:

If the 1950s and ’60s belonged to Jack Kerouac, then the ’60s and ’70s belonged to Ken Kesey. Both of them were my clients, and I liked and admired each of them. Although they differed in age, personality, and writing styles, they overlapped as writers of their times, and there was room for both. Each man was an iconoclastic thinker whose writing and philosophy inspired passionate devotion in his readers.

Before I ever met Kesey, Tom Guinzburg, president of Viking Press, called me one day in 1961 to ask whether Kerouac would write a blurb for One Flew Over the Cuckoo’s Nest, Kesey’s first novel. Tom had bought the book, but Viking had not yet published it. Publishers are always looking for well-known writers to offer positive comments for the book jacket or a press release. A blurb can be particularly helpful if readers feel there is a creative relationship between the two writers. I had no idea whether Kerouac would help, because I couldn’t remember his having blurbed before, but I didn’t think he would be offended if I asked. I thought he might even be flattered. So I told Tom to send me the manuscript. I read it before passing it on to Jack, and I knew right then that I wanted to work with Kesey. His novel was a bold, creative story of what happens in a mental institution—a very daring subject for his time. In the end, Jack did not write a blurb; he felt uncomfortable doing it, perhaps not wanting to get into that arena and all that went with it, and I respected that.

I called Guinzburg to tell him I’d like to represent Kesey, who didn’t have an agent, and then got in touch with Ken. He was delighted, and we started working together.

More here.

One Decade In Brooklyn…

Jhumpa Lahiri in The Brooklyn Rail:

In 2005 we bought a house in Fort Greene. I let go of the studio and acquired, for the first time in my life, a room to call my own, with a door to shut, and serving no other purpose. A single window, the only window of the house that faces south, looks out at the clock tower of the Williamsburgh Savings Bank. I have to stand up to see it. But it is there, a symbol and centerpiece of the borough, marking the hours of work I will not recall.

Boston is the city where I became a writer, but in Brooklyn I took on a far more daunting challenge, and that is to be a writer and a parent at the same time. Literary biographies and memoirs tell us that until recently, people tended to be one thing, not both. That the conflicting demands of each enterprise—one a self-centered, solitary vocation, the other inherently giving, in which the priorities of the self recede—could not coexist. But here in Brooklyn the exception seems to be the rule, because I am surrounded by, inspired by writers of all stripes, men and women alike, who are equally dedicated, though the equation is never a perfect one, to both the writing of books and the raising of children. You will find them attending birthday parties more often than book parties. You will find them, after a day of writing, not mixing a martini but preparing macaroni and cheese. You will find them rushing home from teaching writing classes at Princeton or Hunter College, in time to read to their children before bed. You will find them attending a friend’s reading with a newborn in a sling, being supportive to the friend, stepping onto the sidewalk when the baby needs comforting.

Something about Brooklyn accommodates both these callings, both drives. There are days when the prospect feels impossible, days when a school holiday means no writing gets done, or days when we choose to sit at our desks instead of accompany a field trip with our child’s class. There are months, even years, when our creative work may be put on hold.

In Brooklyn, versions of these choices are always being made, because examples of such writers are everywhere.

The Art of Money

In More Intelligent Life:

Renaissance-era Florence is remembered not for its bankers but for its beauty. Yet the city is now hosting a splendid exhibition that reaffirms the important link between the two. High finance not only funded high art, but its money and movement helped to fuel the humanist ideals that inspired the Renaissance. This show, curated by Tim Parks, a British writer based in Italy, and Ludovica Sebregondi, an Italian art historian, considers the influence of 15th-century financiers on Italian art and culture.

“Money and Beauty” is divided into two parts: how money was made, and how it was spent. The gold florin, first minted in 1252 (and equal to $150 today), made the Florentine republic the heart of a nascent banking system that stretched from London to Constantinople. The Medici bank was supreme for almost a century, till its collapse in 1494 when the family was ousted from political power. This show, on view in the Strozzi palace (built in 1489 by a rival banking family), also traces the humbler fortunes of Francesco di Marco Datini, the “merchant of Prato”, using the vast archive he left behind. To recreate the daily activities of these bankers as well as their world view, the exhibition includes paintings and mercantile paraphernalia, from weighty ledgers to nautical maps.

The Church deemed it sinful to charge interest on loans, viewing it as profit without labour. This gave rise to artful and elaborate ways to disguise such profit-making, including foreign currency deals and triangular trading. The divergence of moral and commercial values can be seen in some Flemish paintings included here, such as Marinus van Reyerswaele’s “The Money Changer and his Wife”, in which a couple fixates on their coins while their candle is snuffed out (pictured top).

Making the iBio for Apple’s Genius

Janet Maslin in The New York Times:

After Steve Jobs anointed Walter Isaacson as his authorized biographer in 2009, he took Mr. Isaacson to see the Mountain View, Calif., house in which he had lived as a boy. He pointed out its “clean design” and “awesome little features.” He praised the developer, Joseph Eichler, who built more than 11,000 homes in California subdivisions, for making an affordable product on a mass-market scale. And he showed Mr. Isaacson the stockade fence built 50 years earlier by his father, Paul Jobs. “He loved doing things right,” Mr. Jobs said. “He even cared about the look of the parts you couldn’t see.”

Mr. Jobs, the brilliant and protean creator whose inventions so utterly transformed the allure of technology, turned those childhood lessons into an all-purpose theory of intelligent design. He gave Mr. Isaacson a chance to play by the same rules. His story calls for a book that is clear, elegant and concise enough to qualify as an iBio. Mr. Isaacson’s “Steve Jobs” does its solid best to hit that target.

More here.

How the Potato Changed the World

From Smithsonian:

When potato plants bloom, they send up five-lobed flowers that spangle fields like fat purple stars. By some accounts, Marie Antoinette liked the blossoms so much that she put them in her hair. Her husband, Louis XVI, put one in his buttonhole, inspiring a brief vogue in which the French aristocracy swanned around with potato plants on their clothes. The flowers were part of an attempt to persuade French farmers to plant and French diners to eat this strange new species. Today the potato is the fifth most important crop worldwide, after wheat, corn, rice and sugar cane. But in the 18th century the tuber was a startling novelty, frightening to some, bewildering to others—part of a global ecological convulsion set off by Christopher Columbus.

About 250 million years ago, the world consisted of a single giant landmass now known as Pangaea. Geological forces broke Pangaea apart, creating the continents and hemispheres familiar today. Over the eons, the separate corners of the earth developed wildly different suites of plants and animals. Columbus’ voyages reknit the seams of Pangaea, to borrow a phrase from Alfred W. Crosby, the historian who first described this process. In what Crosby called the Columbian Exchange, the world’s long-separate ecosystems abruptly collided and mixed in a biological bedlam that underlies much of the history we learn in school. The potato flower in Louis XVI’s buttonhole, a species that had crossed the Atlantic from Peru, was both an emblem of the Columbian Exchange and one of its most important aspects.

More here.

Should Occupy Wall Street Go Rawlsian?

Steven Mazie argues the case in the NYT's Opinionator:

To their credit, protestors have recently begun debating which specific demands the movement should make, but their conversations appear to be unguided by any deeper wisdom. A perfect intellectual touchstone would be the work of John Rawls, the American political philosopher who was one of the 20th century’s most influential theorists of equality. Rawls named his theory “justice as fairness,” and emphasized in his later writings that its premises are rooted in the history and aspirations of American constitutionalism. So it’s a home-grown theory that is ripe for the picking.

Despite providing a remarkable venue for what Al Gore called a “primal scream of democracy,” Occupy Wall Street is leveraged too heavily on the rhetoric of rage rather than reciprocity. Rawls would argue that Occupy is fully justified in its criticism of the political and economic structures that propagate massive concentrations of wealth; he saw the “basic structure” of society as the “primary subject of justice.” But Rawls would lament the tendency of the “99 percent” to misdirect their energies into hatred of individuals in the 1 percent. He would have them save their hostility for the policies and institutions that have permitted only the wealthiest to enjoy significant gains from the past two decades of economic growth.

Rawls’s boldest claim — that inequality in society is only justified if its least well-off members fare better than they would under any other scheme — could provide a lodestar for the protests. Rawls was no Marxist: this “difference principle” acknowledges that a productive, free society will be home to at least some degree of inequality. But the principle insists that if the rich get richer while wages and social capital of the poor and middle class are stagnant or falling, there is something seriously wrong.

Lionel Trilling & the critical imagination

Gertrude Himmelfarb in The New Criterion:

Why Trilling Matters: it is a curiously defensive title for a book about a man who was a star in the much-acclaimed circle of “New York intellectuals,” who delivered the first of the Jefferson Lectures bestowed by the government for “distinguished intellectual and public achievement in the humanities,” and whose major collection of essays, The Liberal Imagination, has gone through half-a-dozen editions since it was first published in 1950 (most recently in 2008), totalling 70,000 copies in hard cover and more than 100,000 in paperback. Yet that defensive tone, unfortunately, is warranted. In spite of the availability of his work, Lionel Trilling today is almost unknown in academia, resurrected occasionally in an article or book, more often to be belittled or criticized than celebrated.

Adam Kirsch, seeking to restore Trilling to his rightful place in the literary and intellectual world, tells us that as an English major in the mid-1990s, he never read Trilling or even heard him discussed in class. It was only later that he came to the critic on his own and read him for “pleasure.” He then discovered that Trilling does indeed matter—and matters all the more because literature itself, he regretfully observes, seems to matter so little. In 1991, Dana Gioia, later the chairman of the National Endowment for the Arts, wrote an essay, “Can Poetry Matter?,” complaining that poetry no longer mattered, that, unlike fiction, it had become the specialized calling of a small and isolated group. Five years later, the novelist Jonathan Franzen made the same complaint about fiction, deploring the neglect of novels in favor of movies and the web. In 2004, a survey by the NEA found that the reading of any kind of literature is in dramatic decline, especially, and most ominously, among the young.

Trilling matters, then, Kirsch insists, because literature matters—and literature as Trilling understood it.

More here.

The Fierce Imagination of Haruki Murakami

Sam Anderson in the New York Times Magazine:

I prepared for my first-ever trip to Japan, this summer, almost entirely by immersing myself in the work of Haruki Murakami. This turned out to be a horrible idea. Under the influence of Murakami, I arrived in Tokyo expecting Barcelona or Paris or Berlin — a cosmopolitan world capital whose straight-talking citizens were fluent not only in English but also in all the nooks and crannies of Western culture: jazz, theater, literature, sitcoms, film noir, opera, rock ’n’ roll. But this, as really anyone else in the world could have told you, is not what Japan is like at all. Japan — real, actual, visitable Japan — turned out to be intensely, inflexibly, unapologetically Japanese.

This lesson hit me, appropriately, underground. On my first morning in Tokyo, on the way to Murakami’s office, I descended into the subway with total confidence, wearing a freshly ironed shirt — and then immediately became terribly lost and could find no English speakers to help me, and eventually (having missed trains and bought lavishly expensive wrong tickets and gestured furiously at terrified commuters) I ended up surfacing somewhere in the middle of the city, already extremely late for my interview, and then proceeded to wander aimlessly, desperately, in every wrong direction at once (there are few street signs, it turns out, in Tokyo) until finally Murakami’s assistant Yuki had to come and find me, sitting on a bench in front of a honeycombed-glass pyramid that looked, in my time of despair, like the sinister temple of some death-cult of total efficiency.

More here.

Todd Gitlin on Why OWS is Different From All Other Social Movements

Matt Bieber in The Wheat and the Chaff:

Todd Gitlin is a professor of journalism and sociology and chair of the Ph.D. program in Communications at Columbia University. In 1963-64, Gitlin served as the third president of Students for a Democratic Society. Later, he helped organize the first national demonstration against the Vietnam War and the first American demonstrations against corporate aid to the apartheid regime in South Africa.

MATT: During a panel last week at Harvard’s Kennedy School, you suggested that there’s a key difference between the Occupy Movement and other social movements. While most social movements begin with sparse public support, the Occupy Movement begins with potentially widespread support for its goal of reducing wealth inequality. Say more about this distinction and what it might mean for the Occupy Movement.

TODD: I hadn’t realized this until I checked off the movements of my recollection, that they had started as minority uprisings – at least expressions of dissidence – in comparison to the population as a whole. So the Civil Rights Movement, which obviously was popular with black people but not with Americans overall, certainly not in the South, when it broke out. The anti-Vietnam War movement represented a small minority, maybe a little more than 10%, when it erupted. The women’s movement, it’s hard to say – possible exception there. The gay movement was certainly not a popular movement over all. I see this more as the rule than the exception. Perhaps the major exceptions in American history were the Populist and labor movements against the robber barons in the late 19th century. But of course there were no polls, so nobody knows.

More here.

Friday, October 21, 2011

Should a Scientific Meeting Attempt to Address Questions of Faith?

Dave Munger in Seed:

Scientists were asking three big questions about the Faith and Science panel at the World Science Festival last month. Should the panel be funded by the Templeton Foundation, which some accuse of harboring a pro-religion agenda? Should the panel include a “New Atheist” like Richard Dawkins or Daniel Dennett? And should a festival devoted to “science” discuss matters of faith at all?

The last question might be the easiest to answer. While many scientists believe that science and faith are completely separate, others argue that science shows that faith and religion are unnecessary. Ironically, if this latter argument is true, then it follows that a session on faith and science is essential for proper understanding of science. As Razib Khan, a blogger for Discover magazine, observed last year, over 50 percent of scientists believe in God or some higher power. And as medical writer Tom Rees noted, the phenomenon isn’t going away: younger scientists are more likely to hold religious beliefs than older scientists. While the finding could suggest that religious people are more likely to leave science as they get older, it could also mean that religious beliefs are growing among scientists. If the New Atheists are right and science really does invalidate religion, then it’s essential that these increasingly religious scientists discuss the issue at scientific meetings. If the New Atheists are wrong, then scientists should still be discussing the issue to address this apparent deficiency in the atheists’ scientific reasoning.

Science Controversies Past and Present

Steven Sherwood in Physics Today:

In the decades before Galileo began his fervent promotion of Copernicanism, the Catholic Church took an admirably philosophical view of the idea. As late as 1615, Cardinal Robert Bellarmine acknowledged that “we should . . . rather admit that we did not understand [Scripture] than declare an opinion to be false which is proved to be true.” But the very next year he officially declared Copernicanism to be false, stating that there was no evidence to support it, despite Galileo’s observations and Kepler’s calculations. Institutional imperatives had forced a full rejection of Copernicanism, which had become threatening precisely because of the mounting evidence.

Even Albert Einstein was not immune to political backlash. His theory of general relativity…undermined our most fundamental notions of absolute space and time, a revolution that Max Planck avowed “can only be compared with that brought about by the introduction of the Copernican world system.” Though the theory predicted the anomalous perihelion shift of Mercury’s orbit, it was still regarded as provisional in the years following its publication in 1916.

When observation, by Arthur Eddington and others, of a rare solar eclipse in 1919 confirmed the bending of light, it was widely hailed and turned Einstein into a celebrity. Elated, he was finally satisfied that his theory was verified. But the following year he wrote to his mathematician collaborator Marcel Grossmann:

This world is a strange madhouse. Currently, every coachman and every waiter is debating whether relativity theory is correct. Belief in this matter depends on political party affiliation.

Instead of quelling the debate, the confirmation of the theory and acclaim for its author had sparked an organized opposition dedicated to discrediting both theory and author. Part of the backlash came from a minority of scientists who apparently either felt sidelined or could not understand the theory. The driving force was probably professional jealousy, but scientific opposition was greatly amplified by the anti-Semitism of the interwar period and was exploited by political and culture warriors. The same forces, together with status quo economic interests, have amplified the views of climate contrarians.

The historical backlashes shed some light on a paradox of the current climate debate: As evidence continues to accumulate confirming longstanding warming predictions and showing how sensitive climate has been throughout Earth’s history, why does climate skepticism seem to be growing rather than shrinking? All three provocative ideas—heliocentricity, relativity, and greenhouse warming—have been, in Kuhn’s words, “destructive of an entire fabric of thought,” and have shattered notions that make us feel safe. That kind of change can turn people away from reason and toward emotion, especially when the ideas are pressed on them with great force.

Vibrant Matter, Zero Landscape: Klaus K. Loenhart interviews Jane Bennett

Over at eurozine:

Klaus K. Loenhart: For the Zero Landscape edition of GAM, landscape and the environment, often perceived as the seemingly passive background to our cultural endeavours, are elevated to the status of a protagonist. In your own work “on the seemingly passive”, how did you arrive at your position of a political ecology of things and matter?

Jane Bennett: Prior to reading GAM's call for papers, I had not focused on the sensibility-shaping powers of the category “landscape”. But of course “landscape” (like “environment”) has presented the world as naturally divided into active bodies (life) and passive contexts (matter). I think many people now find this picture implausible. For us, landscape is better understood as an “assemblage” or working set of vibrant materialities. In Vibrant Matter[1] I inflected Deleuze and Guattari's notion of assemblage in this way: “Assemblages are living, throbbing confederations that are able to function despite the persistent presence of energies that confound them from within. They have uneven topographies, because some of the points at which the various affects and bodies cross paths are more heavily trafficked than others… Each member of the assemblage has a certain vital force, but there is also an effectivity proper to the grouping as such: an agency of the assemblage”. Clearly, a landscape possesses an efficacy of its own, a liveliness intermeshed with human agency. Clearly, the scape of the land is more than a geo-physical surface upon which events play out. Clearly, a particular configuration of plants, buildings, mounds, winds, rocks, moods does not operate simply as a tableau for actions whose impetus comes from elsewhere.

You ask how it came to pass that it now seems to me wrong (not morally wrong but perceptually imprecise) to speak as if materiality or landscape were mere matter. No one knows exactly how one comes to believe and perceive as one does, but I'll give it a try, speaking first of a biographical factor, and then naming some literary-philosophical influences.

Drawing the Holocaust

The NYRB blog has two excerpts from a book of conversations between Hillary Chute and Art Spiegelman about his Maus books, the first of which came out 25 years ago. The first excerpt, on “Why Mice?,” can be found here. From the second:

Hillary Chute: You mentioned your parents had some books about the war around the house when you were growing up. How did they inform your thinking?

Art Spiegelman: Well, there were the small-press books I told you about, and one called The Black Book, a cataloguing of the atrocities – and one paperback on the same hidden shelf of forbidden knowledge that was about Aleister Crowley and Satanism called The Beast 666. Anyway all of it kind of sat together as a kind of semi-pornography for me. In fact, I think House of Dolls, a sleazily unhealthy fiction/memoir by a survivor that was a widely read paperback book in the fifties about the whorehouses of Auschwitz, might have been on that shelf too. Many years later I read Shivitti, a memoir by the House of Dolls author, Ka-Tzetnik, about his LSD therapy and revisiting Auschwitz on acid, and trying to come to terms with an incestuous relationship with his sister who died in the camps – what an astounding character! Anyway I read part of House of Dolls as pornography, which, I guess, is the way most people read it: as part of the whole leather-bondage sexy-Nazi pathology. As a kid, the connection between the pornographic aspect of the death camps—the forbidden, the dangerous and fraught—was all one big stew that I couldn’t separate out.

To say those books informed my thinking, or even to say I was thinking about this at all in my early teens, would give me too much credit. It was all just part of The Big Taboo. It occurs to me right now, though, that perhaps the whole taboo-smashing ethos of the underground comix scene did allow me to stir up the buried connections to the unspeakable that my mother’s secret bookshelf opened up.

Karachi is violent, unhealthy, and unequal. Is that so bad?

Steve Inskeep in Foreign Policy:

Karachi is the economic heart of Pakistan, its main port and financial capital, and an industrial center for everything from textiles to steel. Home to about 400,000 people upon Pakistan's independence in 1947, the city has since expanded to more than 13 million souls by the most conservative estimate, having taken in migrants from every corner of Pakistan and beyond.

The city has grown so swiftly that it evades all efforts to control it. Millions live in illegal neighborhoods, where developers seize and subdivide government land, bribing police not to notice as they sell tiny homes to the poor. Many residents get electricity by tapping power lines, adding stress to a grid that's already overwhelmed, with hours of blackouts every day. Karachi's Lyari River, which used to be a seasonal stream, now flows year-round with untreated sewage. Ultimately, the waste reaches fishing grounds in the Arabian Sea. “Thirty years ago you could drop a coin in the water and see it below the surface,” a Karachi fisherman told me this month. “Now the sea is like a gutter.” Local politics encourage even harsher metaphors. The city faces a political crisis so severe that it has gone more than a year and a half without an elected mayor or city council.

All this makes Karachi an especially vivid place to test some theories about the world's growing cities.

More here.

Steve Jobs Regretted Wasting Time on Alternative Medicine

From Gawker:

Everyone else wanted Steve Jobs to move quickly against his tumor. His friends wanted him to get an operation. His wife wanted him to get an operation. But the Apple CEO, so used to swimming against the tide of popular opinion, insisted on trying alternative therapies for nine crucial months. Before he died, Jobs resolved to let the world know he deeply regretted the critical decision, biographer Walter Isaacson has told 60 Minutes.

“We talked about this a lot,” Isaacson told 60 Minutes of Jobs's decision to treat a neuroendocrine tumor in his pancreas with an alternative diet rather than medically recommended surgery. “He wanted to talk about it, how he regretted it….I think he felt he should have been operated on sooner… He said, 'I didn't want my body to be opened…I didn't want to be violated in that way.'”

The account lends credence to a Harvard cancer researcher we quoted in a controversial post last week.

More here.

Why your cat’s eyes have slit pupils rather than round ones

Yfke van Bergen in The Times of London:

The trouble is that single-focus lenses such as those in humans suffer from chromatic aberration. This means that different wavelengths of light are focused at different distances from the lens and, as a result, some colours are blurred.

In the latest issue of the Journal of Experimental Biology, the researchers reveal that many animals solve this problem by using multifocal lenses.

These are composed of different refractive zones in concentric rings, with each zone tuned to a different wavelength.

Almost all animals with multifocal lenses have slit pupils, which help them to make the most of their unique lens, according to the paper. This is because, even when contracted, a slit pupil lets an animal use the full diameter of the lens, spanning all the concentric refractive zones, allowing for all colours to be sharply focused.
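A minimal worked example in Python (my own sketch, not from the paper) can make the chromatic-aberration point concrete. It combines the thin-lens lensmaker's equation with a simple Cauchy dispersion model; the index coefficients and surface radii below are illustrative assumptions, not measured feline optics.

def refractive_index(wavelength_nm, A=1.50, B=5000.0):
    # Cauchy approximation (illustrative coefficients): the refractive
    # index falls as wavelength grows, so blue light bends more than red.
    return A + B / wavelength_nm**2

def focal_length_mm(wavelength_nm, r1_mm=10.0, r2_mm=-10.0):
    # Thin-lens (lensmaker's) equation: 1/f = (n - 1) * (1/R1 - 1/R2).
    # Surface radii are hypothetical values for a symmetric biconvex lens.
    n = refractive_index(wavelength_nm)
    return 1.0 / ((n - 1.0) * (1.0 / r1_mm - 1.0 / r2_mm))

for wl in (450, 550, 650):  # blue, green, red wavelengths in nanometres
    print(f"{wl} nm -> focal length of about {focal_length_mm(wl):.2f} mm")

With these made-up parameters, blue light comes to a focus roughly a quarter of a millimetre closer to the lens than red light, so a single-focus eye cannot hold all colours sharp at once. A multifocal lens counters this by giving each concentric zone its own wavelength-matched focal length, and a slit pupil keeps every zone in play even when constricted.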

More here.