What is Thought that a Large Language Model Ought to Exhibit (But Won’t)?

by David J. Lobina

Not looking good.

Artificial General Intelligence, however this concept is to be defined exactly, is upon us, say two prominent AI experts. Not exactly an original statement, as this sort of claim has come up multiple times in the last year or so, often followed by various qualifications and the inevitable dismissals (Gary Marcus has already pointed out that this last iteration involves not a little goalpost-shifting, and it doesn’t really stand up to scrutiny, anyway).

I’m very sceptical too, for the simple reason that modern Machine/Deep Learning models are huge correlation machines and that’s not the sort of process that underlies whatever we might want to call an intelligent system. It is certainly not the way we know humans “think”, and the point carries yet more force when it comes to Language Models, those guess-next-token-based-on-statistical-distribution-of-huge-amounts-of-data systems.[1]

This is not to say that a clear definition of intelligence is in place, but we are on firmer ground when discussing what sort of abilities and mental representations are involved when a person has a thought or engages in some thinking. I would argue, in fact, that the account some philosophers and cognitive scientists have put together over the last 40 or so years on this very question ought to be regarded as the yardstick against which any artificial system needs to be evaluated if we are to make sense of all these claims regarding the sapience of computers calculating huge numbers of correlations. That’s what I’ll do in this post, and in the following I shall show how most AI models out there happen to be pretty hopeless in this regard (there is a preview in the photo above). Read more »



What does it mean to have a ‘right to life’?

by Oliver Waters

If you were a medieval peasant in the year 1323 AD, would you have believed that slavery was morally permissible?

The odds are that you would have. After all, most people at the time saw slavery as a permanent fact of life, not an abomination that ought to be abolished. But it’s very tempting to assume that you, as a rational, thoughtful individual, could have transcended your historical setting to grasp its transcendent wrongness.

To do so, however, you would have needed to reject the mainstream beliefs of your society. You would have had to think through the issue from first principles. This would include developing a coherent theory that accounted for human moral equality – a tall order, given that the bulk of humanity didn’t manage this feat for another few hundred years.

It can be fun to pass judgment on the silliness of past generations, but the real work of moral philosophy is figuring out which ideas we take for granted today that future generations will look back on with the same contempt as we do for slavery.

After all, as Mark Twain warned:

It ain’t what you don’t know that gets you into trouble. It’s what you know for sure that just ain’t so.

With that in mind, here’s a moral claim that’s obviously true, according to most people alive today:

It is always morally wrong to kill an innocent human being.

When we look at this claim more closely however, from first principles, it appears to be not only false, but a dogma responsible for a tremendous amount of unnecessary suffering. Read more »

Poetry in Translation

Learning and Love

by Mohammad Iqbal (1877-1938)

“Love is madness,” Learning said.
“Learning is suspicion and doubt,” Love said.

O Learning, do not a bookworm be, you are veiled
Love is radiant, steadfast, a pageant of life and death

Learning displays the divine essence logically; love illogically
“Question everything,” says Learning. “I am the answer,” says Love

Love is a king as well as an ascetic, dweller, and a dwelling, enslaves
Even royalty, champions life with certainty, throws open the gate of love

Laws of love forbid rest, allow tumult of storms, the joy of reaching a shore;
Forbid love’s harvest after all. Learning is Son of the Book; Love, the Mother.

***

Translated from the original Urdu by Rafiq Kathwari

The Other Orwell, the Cold War, the CIA, MI6, and the Origin of Animal Farm: A Conversation between John Reed and Andrea Scrima 

by Andrea Scrima

Twenty years ago, John Reed made an unexpected discovery: “If Orwell esoterica wasn’t my foremost interest, I eventually realized that, in part, it was my calling.” In the aftermath of September 11, 2001, ideas that had been germinating suddenly coalesced, and in three weeks’ time Reed penned a parody of George Orwell’s Animal Farm. The memorable pig Snowball would return from exile, bringing capitalism with him—thus updating the Cold War allegory by fifty-some years and pulling the rug out from underneath it. At the time, Reed couldn’t have anticipated the great wave of vitriol and legal challenges headed his way—or the series of skewed public debates with the likes of Christopher Hitchens. Apparently, the world wasn’t ready for a take-down of its patron saint, or a sober look at Orwell’s (and Hitchens’s) strategic turn to the right.

Snowball’s Chance, it turns out, was only the beginning. The book was published the same year as Hitchens’s Why Orwell Matters, and the media frequently paired the two. In the years that followed, Reed wrote a series of essays (published in The Paris Review, Harper’s, The Believer, and other journals) analyzing the heated response to the book and everything it implied. Orwell’s writing had long been used as a propaganda tool, and evidence had emerged that his political leanings went far beyond defaming communism—but if facing this basic historical truth was so unthinkable, what was the taboo preventing us from seeing? Reed’s examination of our Orwell preoccupation sifts through the changes the West has undergone since the Cold War: its cultural crises, its military disasters, its self-deceptions and confusions, and more recently—perhaps even more troubling—its new instability of identity. The Never End brings together nine of these essays and adds an Animal Farm timeline, a footnoted version of Orwell’s proposed preface, and the Russian text Animal Farm originally drew from to more clearly assess the circumstances behind, and the conclusions to be drawn from, the book’s global importance. Read more »

The Many Faces of Dementia

by Carol A Westbrook

Dementia refers to progressive, irreversible cognitive impairment usually seen in the elderly. The clinical findings of dementia almost always include some degree of memory impairment. We didn’t know much about how memories were formed in the brain until 1953, when the now-famous patient Henry Molaison (HM) underwent removal of an area in the temporal lobe of his brain called the hippocampus. The operation successfully prevented his seizures, but unfortunately HM also lost the ability to form new memories of events, and his recollection of anything that had happened in the preceding eleven years was severely impaired. Other types of memory, such as learning physical skills, were not affected. This was the first step in learning how and where memories are formed in the human brain.

We now know that the hippocampus plays an important part in the formation of new memories through the physical interaction and modification of neurons, and that it also processes short-term memories into long-term memories, which are then stored in the frontal cortex. Other brain structures have their own specific tasks in memory development (see figure 2): the amygdala adds emotional pertinence to memories, such as fear, pleasure, or pain, whereas physical skills and movement depend on the cerebellum. We are beginning to understand how and why specific brain lesions can lead to different forms of dementia. Read more »

Monday, October 16, 2023

Memories of Martti

by S. Abbas Raza

“Martti Ahtisaari, ex-Finland president and Nobel peace laureate, dies aged 86” runs the headline of his obituary in The Guardian today. But he was much more than that. First of all, he was the father of one of my closest friends, Marko Ahtisaari, who was with me in graduate school in the philosophy department at Columbia in the 1990s and was instrumental in my starting 3 Quarks Daily almost 20 years ago.

I first met Martti in the immediate aftermath of the September 11, 2001 attacks in NYC. I had written a couple of paragraphs about what it felt like to be a New Yorker on the day after 9/11, and must have sent what I’d written to Marko who forwarded it to Martti, who then asked if he could read what I had written at some important emergency political meeting in Europe (I forget what it was, exactly), and then he did. A couple of weeks later he came to New York to (among other things) meet with Kofi Annan who was Secretary-General of the United Nations at the time. Martti invited me to have breakfast with him, which I did with my sister Sughra, and he turned out to be an exceptionally lovely and modest man.

I met Martti on many occasions after that and was always deeply impressed by his cheerful optimism. He was always full of ideas on how to solve political as well as other problems and was obviously a man of great intelligence but what I loved about him most was his (often self-deprecating) sense of humor, such as the time he told me about the message of congratulations he received when he was first elected President of Finland from a Pakistani friend who had been his roommate in Karachi at the beginning of Martti’s diplomatic career. After congratulating him, the message went on to say, “You weren’t that smart. How did you do it?” This was a typical Martti story. By the way, by a complete coincidence, the man who sent him that message is now buried in a grave next to my mother’s grave in a cemetery in Karachi.

Let me quickly relate one last memory of him which made a lasting impression on me: My wife, Margit, and I were once staying with Martti and his wife, Eeva, in their apartment in Helsinki and the four of us had dinner together. Afterwards, Margit got up and started picking up the dirty plates but she was immediately stopped by Eeva who said, “No, please sit down, that has always been Martti’s job.” And then this wonderful, funny, powerful man, who had recently stepped down as President of Finland and who was still deeply involved in international peace efforts that would eventually win him a Nobel Peace Prize, picked up and washed all our dishes.

NOTE: Several of the editors of 3QD knew Martti Ahtisaari and we all wish to extend our deepest sympathies to Eeva and Marko: Your grief is also our grief.

Other Obituaries: New York Times, Washington Post, Bloomberg, Official Memorial Page

The Good and The Popular

by Martin Butler

I was listening recently to some teenagers on the radio talking about how they saw their future lives and was struck by how many expressed the desire to be internet ‘influencers’. Why did I feel distaste? Was it my age, my generation? My problem is not with the internet itself but with the very expression ‘influencer’, and the fact that there was no reference at all to the nature of the influencing. That was almost an afterthought, as if the key to being a successful influencer amounted to mere popularity, chalking up the followers. Presumably though, there are good and bad influencers, and I don’t mean good here in the sense of being able to influence lots of people, but good in the sense of having a positive rather than a negative influence.

It’s the age-old problem of the relationship between the good and the popular. Plato saw the popular as the enemy of the good, but then he is at one end of the scale, famously arguing that democracy was bad because it confused the good with the popular. Societies, he believed, need good government while democracy merely delivers popular government, which is quite a different thing. (Plato uses his simile of the ship to describe democracy, which gave rise to Sebastian Brant’s 15th-century allegory of the Ship of Fools.) Similarly, with regard to the arts, the unashamed elitist might argue that good art is by its very nature difficult, requiring education, intellect, and effort. Popularity requires less. In line with Plato, Mill argued that there are two qualitatively distinct kinds of pleasure, the lower and the higher, the lower pandering to popularity, the higher more difficult to access. According to this way of thinking, the artist, writer or musician who follows high artistic ideals had better not give up the day job, and it’s folly to expect the general paying public to appreciate such ideals even if the work produced is of the highest calibre. Rembrandt died in poverty, Van Gogh only sold one picture in his lifetime, and Moby Dick was a flop and out of print for many years. The list goes on and on.

At the other end of the spectrum are those who deny that there is any intrinsic distinction between the good and the less so, and that the only way to make a meaningful distinction is simply to count the ‘likes’, so to speak. Everything is simply a matter of opinion, so if we want to identify something as good, popularity is the only ‘objective’ means by which we can do it. As in the commercial world, ‘the customer is always right’, and the popular is the good. It is mere snobbery to pretend otherwise, a snobbery I could be accused of with my distaste for the aspiration to be an influencer. For according to this view there is only one kind of good influencer, and that is a successful one.

Both these extremes are unsatisfactory. Surely there can be some kind of relationship between the good and the popular? Read more »

Wigner’s Many Friends: Quantum Mechanics And Reality

by Jochen Szangolies

Theatrical release poster for Kurosawa’s classic Rashōmon. We’ll eventually get to why it’s here.

Whenever I use words like ‘reality’, ‘truth’, or ‘existence’, I feel an almost irresistible urge to mark my vague sense of unease by liberal application of scare quotes. After all, what could such words even mean? They seem to denote concepts too vast and simultaneously slippery to be pinned down by a simple denotative term. Like trying to point at everything all at once, such terms seem to cast so wide a net that they fail to single out any one thing in particular.

There is, I think, a good reason for this unease, and modern science is beginning to reveal the contours of it—and with that, some of its own limitations. In the previous column, I argued that the typical starting point for science, conventionally understood, is the existence of an independent world that we can approximate ever more closely in our knowledge of it. There is a subject-object distinction baked into it that delineates this process neatly into questions of epistemology, of what we can know and how we can know it, and ontology, of what there is. According to this story, a subject with the right epistemological tools can uncover the objective ontology of the world through patient and painstaking labor, perhaps never fully getting there, but coming arbitrarily close. Such, at least, seems to be the hope.

That prior column ended with a discussion of the Kochen-Specker theorem, a famous result in the foundations of quantum mechanics that essentially entails that, when it comes to the (allegedly) microscopic realm subject to the laws of quantum physics, the above clear delineation is not possible in general. What we find must, to some extent, depend on how we look—the ontological inventory of the world is not independent from the epistemic process of interrogating it. Values of observable quantities, if they exist at all, must be contextual, that is, depend on what other values are queried simultaneously.

For the macroscopic world, this strikes us as an absurdity: the color of a ball, say, should not depend on whether it is simultaneously measured together with its size, or its weight! And in our everyday experience, where in principle all quantities can be observed simultaneously, no such effects occur. So perhaps this is just the quantum world being, you know, weird? Maybe for all practical purposes, we can still rely on the world being a solid bedrock of facts awaiting our discovery, like pill bugs hiding under so many rocks to be turned over? Read more »

When I Worked for Fox News

by Barbara Fischkin

I once wrote a political column for Fox News. My point of view was liberal and at times decidedly leftist.

This is true-true and not fake news.

The notorious Fox was then a media baby, albeit an enormous one. At its American launch in 1996, it already had 17 million cable subscribers: millions of Americans looking for a conservative alternative to CNN and company.

Two years later I was hired, as a freelancer, to write an opinion column for a nascent website: Fox News Online. Back then, the television screen ruled. The website was an experiment, to see if the Internet was real. I was told I could opine as I wished, as long as the facts backed me up and I was not libelous or incoherent. A cartoonist was assigned to illustrate my words.

When I was first approached about writing this, I thought it was a practical joke. A dear friend and former newspaper colleague showed up one morning in our family backyard and told me to stop calling her every morning with my take on national and world events. “Write it,” she said. “I will pay you. Two hundred bucks a column once a week. Eight hundred a month.”  Not a lot for Fox News, even then. But I needed the money. Needing money is one of my hobbies. Read more »

Another Look at Sam Bankman-Fried on Shakespeare

by Joseph Shieber

Oh, he’s wrong. So are many of the people coming at him.

One of the most-cited passages of Michael Lewis’s new Sam Bankman-Fried book, Going Infinite, is a quote from a blog post from 2012. In that post, a twenty-year-old Bankman-Fried expresses contempt for the consensus view that Shakespeare is a literary genius.

Bankman-Fried offers two arguments against the consensus view.

The first is an argument on the literary merits: that Shakespeare’s plots rely “on simultaneously one-dimensional and unrealistic characters, illogical plots, and obvious endings.”

The second argument – and the one that raced across the internet after the release of Going Infinite – is a probabilistic one. Given the much smaller population of English speakers in the 16th century, and given the much lower prevalence of education, what are the chances that the best writer in the history of the English language would have been born then? As Bankman-Fried put it:

… the Bayesian priors are pretty damning.  About half of the people born since 1600 have been born in the past 100 years, but it gets much worse than that.  When Shakespeare wrote almost all of Europeans were busy farming, and very few people attended university; few people were even literate–probably as low as about ten million people.  By contrast there are now upwards of a billion literate people in the Western sphere.  What are the odds that the greatest writer would have been born in 1564?  The Bayesian priors aren’t very favorable.

The problem with this, says the twenty-year-old Bankman-Fried, is that “We like old plays and old movies and old wines and old instruments and old laws and old people and old records and old music.  We like them because they’re old and come with stories but we convince ourselves that there’s more: we convince ourselves that they really were better.  … We don’t just respect the old; we think that the old is right and that those who prefer the new to the old are wrong.” Read more »

Law Versus Justice IV

by Barry Goldman

In my last piece I mentioned that the lawyers working on the FTX bankruptcy were billing at $2,165 an hour ($595 for paralegals). Since then we learned:

A legal team that forced Tesla’s directors to agree in July to return more than $700 million in compensation to the automaker for allegedly overpaying themselves… want[s] a judge to approve $229 million in fees, or $10,690 an hour, according to a Sept. 8 filing in Delaware’s Court of Chancery.

One of the points of my earlier piece was that the money to pay the lawyers in a civil case comes out of the deal. Lawyers, I said, eat pie. Admittedly, the FTX and Tesla cases are extreme examples, but the fact remains that our system of civil justice is an expensive one. And it is expensive not just in dollars but the way biologists use the term. It takes a great deal of time and effort, talent and resources simply to feed the apparatus. This raises the question of how such an expensive arrangement could have evolved. Wouldn’t society naturally gravitate toward a more efficient system? There are two concepts from social science that I think can help explain: agency capture and the Iron Law of Oligarchy.

Capture works like this. Suppose there is an opening on your local sewer commission. Are you going to apply? Of course not! I assume you are a responsible citizen, and you support efforts to see that public funds are expended in a careful and prudent way. No doubt. But there is no way you are going to sit on the sewer commission.

So who will volunteer to be on the sewer commission? Or the Road Commission or the Board of Manicurists and Nail Technicians? Or any of the dozens of other regulatory bodies in every political jurisdiction? The question answers itself. No one seeks those positions out of civic-mindedness. The only reason anyone serves on any of those bodies is that they have an interest in the actions they take. No one else cares enough to bother. As a result, regulatory bodies become controlled by the entities they were designed to regulate.

This is agency capture. Read more »

All The Justice We Can Afford

by Mike O’Brien

If this article seems less lucid, or artful, or otherwise good in the way that some of my columns are good, you must excuse my failings and instead direct your disappointment towards the ingenuity of modern immunology. I am still, as far as I know, untouched by Covid-19; however, in anticipation of an inevitable despoilment of my precious bodily fluids, I have received a sixth vaccination and can confidently, emphatically say that it is not a placebo. I am heartened by the argument that my scalp-to-toes suffering is a sign that I possess a robust and responsive immune system. Good for me. I am less heartened by the argument that if the vaccine’s viral simulacrum throws me into a sack and beats me with bricks, the real thing will visit even worse horrors upon me. I try not to think about it too much. I wear my mask and get my shots and hope that the virus doesn’t mutate into something worse.

As I discussed in my previous article, I was summoned for jury duty selection in September. It was a breathtakingly botched affair (especially the part where dozens of fellow potential jurors were crammed into an unventilated conference room in the basement of Montreal’s imposing brutalist monument of a courthouse, and this just as Covid rates were spiking upwards again). About an hour into the day, the few hundred citizens compelled to appear in a cavernous courtroom were informed by the judge of four important facts: first, that they would be forbidden from working during the trial; second, that they would receive a stipend amounting to less than the provincial minimum wage; third, that the trial was expected to last a considerable length of time; and fourth, that it was to start the following morning. Read more »

Elude the Force of Gravity at all Costs: Italo Calvino’s Philosophy of Lightness

by Ada Bronowski

Lightness comes in three F’s: finesse, flippancy, and fantasy. The French are famous for the first. See how the delicate, sweet singer-songwriter Alain Souchon transforms the heavyweight aphorism of André Malraux – the real-life French Indiana Jones who ended his career as minister for culture – from the desperate heroism of ‘I learnt that a life is worth nothing, but nothing is more valuable than life’ into an ethereal, refined song: even if you do not understand the words, you cannot help but feel the breezy weightlessness of ‘La vie ne vaut rien, rien, … rien ne vaut la vie’ (life is worth nothing, nothing, … nothing is worth life). The song is finesse incarnate, if that is not already too much of an oxymoron, since finesse is anything but carnal. The rich assonances of the refrain, where ‘rien’ (nothing) rhymes with ‘tiens’ (I hold) and echoes with ‘seins’ (breasts), lift the contradiction, making the breasts abstract and life concrete. The second, flippancy, is the sand British humour was built on, from Bertie Wooster to Blackadder (was, alas: as David Stubbs shows in his recent Different Times: A History of British Comedy, that flippant British humour is now dead, Boris Johnson having chosen to do politics rather than comedy). It is the lightness that transforms despair into melancholy, and the weight of the world on one’s shoulders into elegance and panache.

The third F is at the root of all lightness. Fantasy, etymologically, comes from the Greek word for light, phos – a word which already in antiquity contains the double usage echoed in our English ‘lightness’: the shining light and what is detached from the heaviness of anchors. Fantasy is the realm of the light in all its senses. It is what the light shows us: surfaces, glimmers and shimmers which we can never quite be sure are true, nor can we positively dismiss as false. It is the realm of an in-between that we cannot help but wish were real.

Surely if we can see it, it must be real? Read more »

What’s a Predicate and Who Cares, Anyway?

by Rebecca Baumgartner

Photo by Tony Tran on Unsplash

I was looking at a grammar worksheet my fourth-grader recently brought home, and the instructions said to “Underline the predicate of each sentence.” I paused for a moment. What exactly is a predicate, again? Is it a fancy way of saying verb phrase? Or direct object? Or…what, exactly?

You might think I felt embarrassed to not know this, since I am a wordsmith by trade and by training. On the contrary! I think it’s damning of the educational system that someone with degrees in English and linguistics, who reads and writes constantly, has not found it necessary or important to know what a predicate is. The onus is on the educators to prove that it is in fact necessary and important to know this kind of information.

I love language. I love understanding how it works – so much so, in fact, that I suffered through tedious graduate courses in syntax and morphology taught by people who hadn’t had fun in 30 years. But linguists don’t use the term “predicate” (at least not in the way kids are taught to use it). Normal people don’t use it, either. Hell, the only time I ever even refer to the parts of speech nowadays is when I play Mad Libs. 

So if a card-carrying linguist, erstwhile copyeditor, and hammer-wielding wordsmith has no need to know what a predicate is, the question is: Why does the school system, or the state, think my kid needs to know this stuff? Read more »