
by David J. Lobina
A specter is haunting Artificial Intelligence (AI) – the specter of the environmental costs of Machine/Deep Learning. As Neural Networks have by now become ubiquitous in modern AI applications, the gains the industry has seen in applying Deep Neural Networks (DNNs) to solve ever more complex problems come at a high price. Indeed, the quantities of computational power and data needed to train these networks have increased greatly in the last five or so years. At the current pace of development, this may well translate into unsustainable levels of power consumption and carbon emissions in the long run, though the actual cost is hard to gauge, given how little the industry discloses about it. Nevertheless, according to one estimate, improving cutting-edge DNNs to reduce error rates without expert input for correction could cost US$100 billion and produce as much carbon emissions as New York City does in a month – an impossibly expensive and clearly unethical proposition.
Once upon a time, though, advances in so-called neuromorphic computing, including Intel's commercialization of neuromorphic chips with its Loihi 2 boards and related software, hinted at a future of ultra-low-power but high-performance AI applications. These models were of some interest to me not long ago too, given that they seem to model the human brain more closely than contemporary DNNs such as Large Language Models (LLMs), the paradigm that so completely dominates academia and industry today.
What are these models again?
In effect, LLMs are neural networks designed to track the various correlations between inputs and outputs within the dataset they are fed during training. This is done to build models from which they can then generate specific strings of words given other strings of words as a prompt. Underlain by the “pattern finding” algorithms at the heart of most modern AI approaches, deep neural networks have proven to be very useful in all sorts of tasks – language generation, image classification, video generation, etc. Read more »
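(By way of a concrete illustration, here is a minimal sketch in Python of that "strings of words in, strings of words out" loop. It assumes the Hugging Face transformers library and uses GPT-2 as a stand-in small model; neither is named in the piece itself, and any current LLM would behave analogously.)

# A minimal sketch of next-word generation: the model scores possible
# next tokens given a prompt and extends the string one token at a time.
# GPT-2 and the Hugging Face transformers library are assumptions made
# for this illustration, not systems discussed in the piece.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "A specter is haunting Artificial Intelligence"
inputs = tokenizer(prompt, return_tensors="pt")

# Sample up to 20 new tokens conditioned on the prompt, then decode
# the token IDs back into a string of words.
outputs = model.generate(**inputs, max_new_tokens=20, do_sample=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))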





It doesn’t take a lot of effort to be a bootlicker. Find a boss or someone with the personality of a petty tyrant, sidle up to them, subjugate yourself, and find something flattering to say. Tell them they’re handsome or pretty, strong or smart, and make sweet noises when they trot out their ideas. Literature and history are riddled with bootlickers: Thomas Cromwell, the advisor to Henry VIII; Polonius in Hamlet; Mr. Collins in Pride and Prejudice; and of course Uriah Heep in David Copperfield.
There is something repulsive about lickspittles, especially when all the licking is being done for political purposes. It’s repulsive when we see it in others and it’s repulsive when we see it in ourselves. It has to do with the lack of sincerity and the self-abasement required to really butter someone up. In the animal world, it’s rolling onto your back and exposing the vulnerable stomach and throat, saying I am not a threat.




Risham Syed. The Heavy Weights, 2008.
Although Newcomb’s paradox was discovered in 1960, I’ve been prompted to discuss it now for three reasons, the first being its inherent interest and counterintuitive conclusions. The other two are topical. One is a scheme put forth by Elon Musk in which he offered a small prize to people who publicly approved of the free speech and gun rights clauses in the Constitution. Doing so, he announced, would register them and make them eligible for a daily giveaway of a million dollars provided by him (an almost homeopathic fraction of his 400 billion dollar fortune). The other is the rapid rise in AI’s abilities, especially in AGI (Artificial General Intelligence), which soon enough will be able, somewhat reliably, to predict our behaviors, at least in some contexts.




My 2024 ends with a ceremony of sorts. On December 31st, I’m sitting in a hotel in Salt Lake City an hour before midnight. I’m looking at my phone, which is open to Tinder.
I read the opening of Peter Handke’s A Sorrow Beyond Dreams and immediately thought of Camus’ The Stranger. Here is how Handke begins: