Top 10 books about being poor in America

Monica Potts in The Guardian:

For all its wealth and devotion to the myth of the American Dream, the US allows many more of its citizens to live in poverty than other wealthy countries do. We hold the 10th highest poverty rate in a 2021 ranking of OECD countries, with almost 38 million Americans living in poverty. Even that number is calculated using an outdated measure that probably doesn’t fully account for the number of families that are truly struggling. How is it that a country that prides itself on success and progress can allow so many of its citizens to go without? It’s an urgent question we’ve been writing about – failing to solve – for a long time.

The books about poverty that resonate most for me come in two forms. Novels and narrative nonfiction books often offer a personal, sometimes painfully vivid and honest portrayal of a family or individual in a way that can personalise the potentially abstract issue for readers. My own book, The Forgotten Girls, follows in this tradition. To write it, I returned home to my small, poor town in the Arkansas Ozarks after many years of living in big cities on the east coast to find my childhood best friend: through our reconnection, I explored the ways that the different places we lived marked our adult lives.

More here.



A.I. Is Getting Better at Mind-Reading

Oliver Whang in The New York Times:

The study centered on three participants, who came to Dr. Huth’s lab for 16 hours over several days to listen to “The Moth” and other narrative podcasts. As they listened, an fMRI scanner recorded the blood oxygenation levels in parts of their brains. The researchers then used a large language model to match patterns in the brain activity to the words and phrases that the participants had heard.

Large language models like OpenAI’s GPT-4 and Google’s Bard are trained on vast amounts of writing to predict the next word in a sentence or phrase. In the process, the models create maps indicating how words relate to one another. A few years ago, Dr. Huth noticed that particular pieces of these maps — so-called context embeddings, which capture the semantic features, or meanings, of phrases — could be used to predict how the brain lights up in response to language. In a basic sense, said Shinji Nishimoto, a neuroscientist at Osaka University who was not involved in the research, “brain activity is a kind of encrypted signal, and language models provide ways to decipher it.”

In their study, Dr. Huth and his colleagues effectively reversed the process, using another A.I. to translate the participants’ fMRI images into words and phrases. The researchers tested the decoder by having the participants listen to new recordings, then seeing how closely the translation matched the actual transcript.
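The two steps described above can be caricatured in a few lines of code. The sketch below is not the study's actual pipeline: the "fMRI" data are synthetic, the dimensions are arbitrary, and a plain ridge regression stands in for the real encoding model. It only illustrates the idea of fitting a model that predicts brain activity from language embeddings, then decoding new activity by asking which candidate phrase that model explains best.

```python
# Toy sketch only: synthetic data and ridge regression stand in for the
# study's real recordings and models. All sizes are arbitrary.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_timepoints, embed_dim, n_voxels = 500, 64, 200

# Pretend these are context embeddings of the story a participant heard...
story_embeddings = rng.normal(size=(n_timepoints, embed_dim))
# ...and a hidden linear map plays the role of the brain's response.
true_map = rng.normal(size=(embed_dim, n_voxels))
fmri = story_embeddings @ true_map + 0.5 * rng.normal(size=(n_timepoints, n_voxels))

# Step 1 (encoding): predict brain activity from language embeddings.
encoder = Ridge(alpha=1.0).fit(story_embeddings, fmri)

# Step 2 (decoding by inversion): given new brain activity, score candidate
# phrases by how well the encoder's prediction for each matches that activity.
heard = rng.normal(size=(1, embed_dim))                   # the phrase actually heard
new_fmri = heard @ true_map + 0.5 * rng.normal(size=(1, n_voxels))
candidates = np.vstack([heard, rng.normal(size=(9, embed_dim))])  # heard + 9 decoys

errors = [((encoder.predict(c[None, :]) - new_fmri) ** 2).mean() for c in candidates]
print("best candidate index:", int(np.argmin(errors)))   # 0 means the true phrase won
```

In spirit this mirrors the evaluation described in the excerpt: generate candidate word sequences, keep the one whose predicted brain response best matches the recording, and compare it against the actual transcript.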

More here.

Wednesday Poem

At Noon

At a mountain inn, high above the bulky green of chestnuts,
The three of us were sitting next to an Italian family
Under the tiered levels of pine forests.
Nearby a little girl pumped water from a well.
The air was huge with the voice of swallows.
Ooo, I heard a singing in me, ooo.
What a noon, no other like it will recur,
Now when I’m sitting next to her and her
While the stages of past life come together
And a jug of wine stands on a checkered tablecloth.
The granite rocks of that island were washed by the sea.
The three of us were one self-delighting thought
And the resinous scent of Corsican summer was with us.

by Czeslaw Milosz
from Unattainable Earth
Ecco Press, 1986

Tuesday, May 2, 2023

Kieran Setiya reviews “What’s the Use of Philosophy?” by Philip Kitcher

Kieran Setiya in the London Review of Books:

In August​ 1977, the New York Times ran a profile of the philosopher Saul Kripke, then 36 years old. Aged 17, he had proved a new result in modal logic – the logic of necessity and possibility – by building a mathematical model of ‘possible worlds’. He went on to transform philosophy, reviving dormant metaphysical questions. What makes us the particular people we are? Does science tell us how the world must be, not just how it is? At thirty, Kripke gave the lectures that made his name – they were published as Naming and Necessity in 1972 – and he went on to write a groundbreaking book about Wittgenstein’s later philosophy. Yet despite these achievements, the New York Times noted, he was little known outside his field. ‘Philosophy has become such an arcane discipline that it leaves most laymen gasping for meaning.’ It was also divided between two very different visions: ‘The technicians dream of one master key that could make a science of all philosophy, while the romantics dream of a “big move” that would make philosophy grab the world again and prove that philosophical intuition has not run dry.’

Philip Kitcher is with the romantics. He began his career working in relatively technical areas of the philosophy of mathematics and science, but soon turned towards contentious issues in biology – including creationism and evolutionary psychology – as well as the social nature of scientific practice.

More here.

‘The Godfather of A.I.’ Leaves Google and Warns of Danger Ahead

Cade Metz in the New York Times:

Geoffrey Hinton was an artificial intelligence pioneer. In 2012, Dr. Hinton and two of his graduate students at the University of Toronto created technology that became the intellectual foundation for the A.I. systems that the tech industry’s biggest companies believe are key to their future.

On Monday, however, he officially joined a growing chorus of critics who say those companies are racing toward danger with their aggressive campaign to create products based on generative artificial intelligence, the technology that powers popular chatbots like ChatGPT.

Dr. Hinton said he has quit his job at Google, where he worked for more than a decade and became one of the most respected voices in the field, so he can freely speak out about the risks of A.I. A part of him, he said, now regrets his life’s work.

More here.

How Pakistan can join the South Asia growth boom

Murtaza Hussain at Noahpinion:

This past week was a happy one for many Indians, who celebrated the milestone of becoming the world’s most populous country at a time when their economic and political fortunes are also on the rise. For their neighbors in Pakistan the news of late has been far less upbeat. The past year has been a disastrous one for Pakistanis, who have faced spiraling inflation, low economic growth, catastrophic floods, terrorist attacks, and blowback from a political standoff that has paralyzed the country’s elite. While a rising economic tide has lifted many boats in Asia this century, Pakistan remains stuck in a cycle of stagnation and crisis.

It is seldom appreciated in the West, but Pakistan is the fifth largest country in the world by population and its roughly 240 million citizens make it a larger country than Nigeria and Brazil.

More here.

Quantum computers will transform our world, curing cancer and fixing the climate crisis

David Shariatmadari in The Guardian:

Have you been feeling anxious about technology lately? If so, you’re in good company. The United Nations has urged all governments to implement a set of rules designed to rein in artificial intelligence. An open letter, signed by such luminaries as Yuval Noah Harari and Elon Musk, called for research into the most advanced AI to be paused and measures taken to ensure it remains “safe … trustworthy, and loyal”. These pangs followed the launch last year of ChatGPT, a chatbot that can write you an essay on Milton as easily as it can generate a recipe for everything you happen to have in your cupboard that evening.

But what if the computers used to develop AI were replaced by ones able to make calculations not millions, but trillions of times faster? What if tasks that might take thousands of years to perform on today’s devices could be completed in a matter of seconds? Well, that’s precisely the future that physicist Michio Kaku is predicting. He believes we are about to leave the digital age behind for a quantum era that will bring unimaginable scientific and societal change. Computers will no longer use transistors, but subatomic particles, to make calculations, unleashing incredible processing power. Another physicist has likened it to putting “a rocket engine in your car”. How are you feeling now?
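For a rough sense of where such claims come from (a back-of-the-envelope illustration, not a simulation of any real quantum machine): describing the state of n qubits takes 2^n complex amplitudes, so the classical bookkeeping alone becomes hopeless beyond a few dozen qubits, while a quantum device manipulates that state directly. The NumPy sketch below prints how fast that bookkeeping grows and builds an entangled two-qubit state by hand.

```python
# Back-of-the-envelope only: classical NumPy bookkeeping for a two-qubit state,
# to show why the state space of n qubits (2**n amplitudes) grows so quickly.
import numpy as np

for n in (2, 10, 50, 300):
    print(f"{n:>3} qubits -> {float(2 ** n):.3e} amplitudes to track classically")

state = np.zeros(4); state[0] = 1.0                  # two qubits in |00>
hadamard = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # puts one qubit in superposition
cnot = np.eye(4)[[0, 1, 3, 2]]                       # controlled-NOT entangles them
bell = cnot @ np.kron(hadamard, np.eye(2)) @ state
print(bell)  # amplitude 1/sqrt(2) on |00> and |11>: a maximally entangled pair
```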

More here.

How Do We Ensure an A.I. Future That Allows for Human Thriving?

David Marchese in The New York Times:

When OpenAI released its artificial-intelligence chatbot, ChatGPT, to the public at the end of last year, it unleashed a wave of excitement, fear, curiosity and debate that has only grown as rivals have accelerated their efforts and members of the public have tested out new A.I.-powered technology. Gary Marcus, an emeritus professor of psychology and neural science at New York University and an A.I. entrepreneur himself, has been one of the most prominent — and critical — voices in these discussions. More specifically, Marcus, a prolific author and writer of the Substack “The Road to A.I. We Can Trust,” as well as the host of a new podcast, “Humans vs. Machines,” has positioned himself as a gadfly to A.I. boosters. At a recent TED conference, he even called for the establishment of an international institution to help govern A.I.’s development and use. “I’m not one of these long-term riskers who think the entire planet is going to be taken over by robots,” says Marcus, who is 53. “But I am worried about what bad actors can do with these things, because there is no control over them. We’re not really grappling with what that means or what the scale could be.”

More here.

Technology Is Not Artificially Replacing Life — It Is Life

Sara Walker at Noema:

Human technologies are therefore not much different from other innovations produced in our planet’s 3.8-billion-year living history — with the exception that they are in our evolutionary future, not our past. Multicellular organisms evolved vision; what I will call “multisocietial aggregates” of humans evolved microscopes and telescopes, which are capable of seeing into the smallest and largest scales of our universe. Life seeing life. All of these innovations are based on trial and error and selection and evolution on past objects.

Intelligence is playing a larger role in modern technology, but that is to be expected — intelligence itself improves via evolution. It generates more complex systems — cells, multicellular aggregates like humans, societies, artificial intelligence and now multisocietial aggregates like international companies and groups that interact at the planetary scale. So-called “artificial intelligences” — large language models, computer vision, automated devices, robotics and more — are often discussed as disembodied and disengaged from any evolutionary context.

More here.

Tuesday Poem

Compost

A dying person or thing is still alive. That is the main thing about it; the dying is second. Perhaps even the dead are alive although that is another question, prone to projection. The dead are like us, and unlike. Analogies have problems.

Metaphors are violent, one part always subsumed. What are we left but litany? How are we to draw any line, any word? I am an unwilling surgeon and also the body on the table. The blood is my blood.

The blood is thick with plastics, cholesterol, false estrogen. I make teas from plants that do not manage to amend me. Still, I like the plants. I could compost my blood or tincture it to drink in solidarity with the garden.

Survival is a continuity with the dead. No contradiction. Even in the winter, the wrenching

cold holds seeds.

by Shea Boresi
from the Ecotheo Review

Bob Thompson’s Fraught Dance with the Old Masters

Jackson Arn at The New Yorker:

There is no exact word for what Thompson does with the Old Masters. His paintings—the subject of “Bob Thompson: Agony & Ecstasy,” the unmissable show at Rosenfeld, and another, “Bob Thompson: So Let Us All Be Citizens,” at 52 Walker—contain hundreds of motifs snatched from the Western canon, wedged into dense compositions, and coated in bright colors. The results are too calm for parody and too self-secure for homage. Stanley Crouch thought that Thompson, a jazz fanatic, improvised on European art the way a saxophonist improvises on standards, but even that seems a notch too reverent. He doesn’t riff on masterpieces so much as rifle through them, grabbing a handful of Goya or Tintoretto as though reaching for the cadmium yellow. For “The Entombment” (1960), his painting of a hatted man tumbling off his horse, he seems to have taken the lifeless, drooping torso from El Greco’s “The Entombment of Christ” (which El Greco lifted from Michelangelo, but that’s another story) and cast it in a drama of his own making, so that every brushstroke whooshes toward the bottom like a waterfall.

More here.

Sunday, April 30, 2023

Jennifer Egan and Terese Svoboda on Ghosts, Genre, and ‘Dog on Fire’

Jennifer Egan at The Millions:

Terese Svoboda has been a powerhouse of literary production in recent years, publishing five books since 2018, and partaking of a formidable array of genres and approaches. Dog on Fire is structured as an oppositional narrative duet between two bereaved women—the sister and lover of a young man who struggled with epilepsy—who remember and imagine his life and speculate about his unexplained death. The brief novel also touches on ghosts, aliens, and the possibility of foul play, all testament to Svoboda’s inventive eclecticism. Svoboda was kind enough to answer a few of my questions about Dog on Fire and her wide-ranging writing career.

More here.

The Computer Scientist Peering Inside AI’s Black Boxes

Allison Parshall in Quanta:

Machine learning models are incredibly powerful tools. They extract deeply hidden patterns in large data sets that our limited human brains can’t parse. These complex algorithms, then, need to be incomprehensible “black boxes,” because a model that we could crack open and understand would be useless. Right?

That’s all wrong, at least according to Cynthia Rudin, who studies interpretable machine learning at Duke University. She’s spent much of her career pushing for transparent but still accurate models to replace the black boxes favored by her field.

The stakes are high. These opaque models are becoming more common in situations where their decisions have real consequences, like the decision to biopsy a potential tumor, grant bail or approve a loan application. Today, at least 581 AI models involved in medical decisions have received authorization from the Food and Drug Administration. Nearly 400 of them are aimed at helping radiologists detect abnormalities in medical imaging, like malignant tumors or signs of a stroke.
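A minimal sketch of the contrast the article is pointing at (not Rudin's own models, and not any of the FDA-cleared systems): on a standard tabular dataset, a small, fully readable decision tree can come close to a black-box ensemble, and the entire transparent model can be printed and audited. The dataset, model choices and hyperparameters below are illustrative assumptions.

```python
# Illustrative comparison only: a transparent model vs. a black-box ensemble.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
X, y = data.data, data.target

black_box = RandomForestClassifier(n_estimators=200, random_state=0)
glass_box = DecisionTreeClassifier(max_depth=3, random_state=0)  # small enough to read

print("random forest accuracy:", cross_val_score(black_box, X, y, cv=5).mean().round(3))
print("depth-3 tree accuracy: ", cross_val_score(glass_box, X, y, cv=5).mean().round(3))

# The whole "glass box" fits on a screen and can be checked rule by rule.
glass_box.fit(X, y)
print(export_text(glass_box, feature_names=list(data.feature_names)))
```

The accuracy gap in toy comparisons like this is often small, which is the pattern Rudin has spent her career arguing should push high-stakes applications toward the transparent option.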

More here.

The Best Spy Movie Ever Isn’t James Bond — It’s This

Matthew Mosley in Collider:

Has anyone exerted as much influence over the spy genre as Ian Fleming? Examples of espionage fiction may have predated his transition from naval officer to novelist by well over a century, but it wasn’t until the appearance of the British Secret Service’s finest asset, James Bond, that it became a cultural phenomenon (a feeling strengthened by the character’s legendary reinvention as one of cinema’s greatest icons from 1962’s Dr. No onwards). The success of the 007 franchise was a watershed moment for the genre, establishing a framework that everything released since has either deliberately aped or purposefully avoided. Seventy years on, the formula has lost none of its appeal… but it has contributed to the false impression of what being a spy is actually like. Of course, Ian Fleming knew exactly what he was doing when he put entertainment on a higher pedestal than realism, but it should be obvious that life in the Secret Service isn’t laden with shootouts and car chases. Being a spy is not glamorous – if anything, it’s rather mundane – but it also has the potential to be a lonely and disheartening profession where innumerable lives are lost for negligible results. It’s this feeling at the heart of the 1969 masterpiece, Army of Shadows.

More here.

What Socrates’ ‘know nothing’ wisdom can teach a polarized America

J. W. Traphagan and John J. Kaag in The Conversation:

A common complaint in America today is that politics and even society as a whole are broken. Critics point out endless lists of what should be fixed: the complexity of the tax code, or immigration reform, or the inefficiency of government.

But each dilemma usually comes down to polarized deadlock between two competing visions and everyone’s conviction that theirs is the right one. Perhaps this white-knuckled insistence on being right is the root cause of the societal fissure – why everything seems so irreparably wrong.

As religion and philosophy scholars, we would argue that our apparent national impasse points to a lack of “epistemic humility,” or intellectual humility – that is, an inability to acknowledge, empathize with and ultimately compromise with opinions and perspectives different from one’s own. In other words, Americans have stopped listening.

So why is intellectual humility in such scarce supply?

More here.