Steven Pinker in Nautilus:
On November 10, 2023, my dear friend John Tooby died—or as he would have put it, finally lost his struggle with entropy. John was a Distinguished Professor of Anthropology at the University of California, Santa Barbara, who together with his wife, Leda Cosmides, founded the field of evolutionary psychology. But that academic accomplishment doesn’t do him justice; it’s the institutional embodiment of the way his mind worked. John had insight into human nature worthy of our greatest novelists and playwrights, grounded in an understanding of the natural world worthy of our greatest scientists. Evolution for him was a link in an explanatory chain that connected human thought and feeling to the laws of the natural world.
It was this depth of thinking that made John’s company so precious. His conversations would mix sly observations of people’s foibles with profound allusions to science, history, and culture. Conference audiences forgave him for his famously discursive presentations, in which he might use up his time with a digression on the Big Bang before he ever got to the data.
Belying the canard that evolutionary psychology is a bunch of post hoc just-so stories, John, together with Leda and their students, published many experimental findings that confirmed nonobvious predictions about a wide range of psychological phenomena. These included statistical thinking, the perception of race, the development of sibling feelings, and the emotion of anger.
More here.

Ever since his seminal first recordings as a leader with his Hot Five and Hot Seven ensembles in the 1920s, jazz musicians have called Louis Armstrong “
Take intensifiers like ‘totally’, ‘pretty’ and ‘completely’. We might consciously believe them to be exaggerations that undermine the speaker’s point, yet people consistently rate speakers who use such linguistic boosters as more authoritative and likeable than those who don’t.
Consider a few of the bolder claims made by experts. Two years ago, Blaise Agüera y Arcas, vice president of Google Research, had already declared the end of the animal kingdom’s monopoly on language on the strength of Google’s experiments with large language models. LLMs, he argued, “illustrate for the first time the way that language understanding and intelligence can be dissociated from all the embodied and emotional characteristics we share with each other and with many other animals.” In a similar vein, the Stanford University computer scientist Christopher Manning has argued that if “meaning” constitutes “understanding of the network of connections between linguistic form and other things,” be they “objects in the world or other linguistic forms,” then “there can be no doubt” that LLMs can “learn meanings.” Again, the point is that humans have company. The philosopher Tobias Rees (among many others) has gone further, arguing that LLMs constitute a “far-reaching, epoch-making philosophical event” on par with the shift from the premodern conception of language as a divine gift to the modern notion of language as a distinctly human trait, even our defining one. On Rees’s telling, engineers at OpenAI, Google, and Facebook have become the new Descartes and Locke, “[rendering] untenable the idea that only humans have language” and thereby undermining the modern paradigm those philosophers inaugurated. LLMs, for Rees at least, signal modernity’s end.
To do any task on a computer, you have to tell your device which app to use. You can use Microsoft Word and Google Docs to draft a business proposal, but they can’t help you send an email, share a selfie, analyze data, schedule a party, or buy movie tickets. And even the best sites have an incomplete understanding of your work, personal life, interests, and relationships and a limited ability to use this information to do things for you. That’s the kind of thing that is only possible today with another human being, like a close friend or personal assistant.
The world’s fossil fuel producers are planning expansions that would blow the planet’s carbon budget twice over, a
I am what I want, and I have the power within myself to make myself what I want to be, if only I find the will to activate this inner potential—or rather, to manifest this authentic identity. Such is the thesis under review in Tara Isabella Burton’s new book,
One stifling hot night in early August, I dreamed, as I always do when I have a fever, the old, familiar dream: the earth opens up before my feet, a gaping pit appears, and into this pit I fall, then clamber straight back out, as eager as a cartoon character, only to fall into the next pit that suddenly yawns before me. An endless obstacle course engineered by some higher power, an experiment going nowhere, the opposite of a story. This dream has followed me since childhood and is probably as old as the realization that I will, one day, end up in a pit forever. As a piece of drama, it is extremely simple, and yet it’s an effective dream and no more unoriginal than that of my friend Sibylle, who told me over breakfast a few days later that she has regular nightmares of being swept away by a vast, tsunami-like wave.
Science journalism is really about everything, I like telling my science-journalism students, because science is really about everything. Take The Golden Bowl, the insanely prolix novel by Henry James, which I read as penance after zipping through a Stephen King gore-fest. The Golden Bowl, I’d heard, is a slog compared to thrillers like The Portrait of a Lady and The Turn of the Screw, but James called Bowl his “solidest” novel. By that he must have meant the most Jamesian, because The Golden Bowl reads like a parody of James. H.G. Wells’ comparison of James to a hippopotamus trying to pick up a pea comes to mind.
In April,
In a few weeks it will be 30 years since
By now, we’ve seen the ChatGPT parlor tricks. We’re past the novelty of a cake recipe in the style of Walt Whitman or a weather report by the painter Bob Ross. For the one-hundredth time, we understand that the current incarnation of large language models makes mistakes. We’ve done our best to strike a studied balance between doomers and evangelists. And we’ve become less skeptical of “emergent” flashes of insight from the aptly named foundational models. At the same time, Google, Meta, and a list of hopeful giant-swatters have released credible competitors to ChatGPT.