by John Allen Paulos
Although Newcomb’s paradox was discovered in 1960, I’ve been prompted to discuss it now for three reasons, the first being its inherent interest and counterintuitive conclusions. The two other reasons are topical. One is a scheme put forth by Elon Musk in which he offered a small prize to people who publicly approved of the free speech and gun rights clauses of the Constitution. Doing so, he announced, would register them and make them eligible for a daily giveaway of a million dollars provided by him (an almost homeopathic fraction of his $400 billion fortune). The other is the rapid rise in AI’s abilities, especially progress toward AGI (Artificial General Intelligence). Soon enough, AI will be able, somewhat reliably, to predict our behaviors, at least in some contexts.
With this prologue, let me get to Newcomb’s paradox, a puzzle suggesting that in some situations the rational thing to do results in an outcome much worse than doing what doesn’t make sense.
As mentioned, it was devised in 1960 by William Newcomb, a physicist at the University of California, but it was developed and popularized by the philosopher Robert Nozick in 1969.
The puzzle involves an assumed entity of some sort – a visitor from an advanced civilization, a robot with access to lightning-fast computers, an all-knowing network of AI-enhanced neural agents, whatever – that has the financial backing of a multi-billionaire. This billionaire claims that his ultra-sapient agent can predict with good accuracy which of two specific alternatives a person will choose when presented with them. The billionaire further announces a sort of online lottery to demonstrate the agent’s abilities.
He explains that the agent’s assessment of people will utilize two types of boxes. Boxes of type A are transparent and all contain $1,000, whereas boxes of type B are opaque and contain either $0 or $1,000,000, the cash prizes provided by the billionaire, of course.
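To make the counterintuitive conclusion concrete, here is a minimal sketch (not part of the essay) of the expected-value arithmetic, assuming the standard rules of the puzzle, which the excerpt breaks off before stating: a person may take only box B or both boxes, and the agent puts the $1,000,000 in box B only if it predicts the person will take box B alone, predicting correctly with probability p.

```python
# A rough sketch of the expected-value comparison behind Newcomb's paradox,
# under the assumed standard rules described above.

def expected_payoffs(p: float) -> tuple[float, float]:
    """Expected winnings (one_box, two_box) when the agent predicts correctly with probability p."""
    one_box = p * 1_000_000 + (1 - p) * 0                 # take box B alone
    two_box = p * 1_000 + (1 - p) * (1_000_000 + 1_000)   # take both boxes
    return one_box, two_box

for p in (0.5, 0.9, 0.99):
    one_box, two_box = expected_payoffs(p)
    print(f"accuracy {p:.2f}: one-box ${one_box:,.0f} vs two-box ${two_box:,.0f}")

# Once the agent's accuracy exceeds roughly 0.5005, taking only box B has the
# higher expected value, even though taking both boxes is better in each
# individual case, which is the heart of the paradox.
```

At an accuracy of 0.99, for instance, the one-box expectation is $990,000 versus only $11,000 for taking both boxes.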