Snake Oil, Vitamins, and Self-Help

by Mark Harvey

Vitamins and self-help are part of the same optimistic American psychology that makes some of us believe we can actually learn the guitar in a month and de-clutter homes that resemble 19th-century general stores. I’m not sure I’ve ever helped my poor old self with any of the books and recordings out there promising to turn me into a joyful multi-billionaire and miraculously develop the sex appeal to land a Margot Robbie. But I have read an embarrassing number of books in that category with embarrassingly little to show for it. And I’ve definitely wasted plenty of money on vitamins and supplements that promise the same thing: revolutionary improvement in health, outlook, and clarity of thought.

On the face of it, there’s nothing wrong with self-help. I think one of the most glorious and heartening visions in the world is that of an extremely overweight man or woman jogging down the side of the road in athletic clothing and running shoes. When I see such a person, I say a little atheist prayer hoping that a year from now they have succeeded with their fitness regime and are gliding down the Boston Marathon, fifty pounds lighter. You never know how they decided to buy a pair of running shoes and begin what has to be an uncomfortable start toward fitness. But if it was a popular book or inspirational YouTube video that nudged them in that direction, then glory be!

The same goes for alcoholics and drug addicts. Chances are, millions are bucked up by a bit of self-help advice from a recovering addict or alcoholic, by an inspirational quote they read, or even by certain supplements that help their bodies heal from abuse.

But so much of what’s sold as life-changing does little more than eat at a person’s finances in little $25 increments of shiny books and shiny bottles. Sometimes the robberies are bigger—thousands of dollars in the form of fancy seminars, retreats, or involved online classes. There are thousands of versions of snake oil, and there will always be people lining up for some version of it. Read more »



Monday, April 22, 2024

The Irises Are Blooming Early This Year

by William Benzon

I live in Hoboken, New Jersey, across the Hudson River from Midtown Manhattan. I have been photographing the irises in the Eleventh Street flower beds since 2011. So far I have uploaded 558 of those photos to Flickr.

I took most of those photos in May or June. But there is one from April 30, 2021, and three from April 29, 2022. I took the following photograph on Monday, April 15, 2024 at 4:54 PM (digital cameras can record the date and time an image was taken). Why so early in April? Random variation in the weather, I suppose.

Irises on the street in Hoboken.

That particular photo is an example of what I like to call the “urban pastoral,” a term I once heard applied to Hart Crane’s The Bridge.

Most of my iris photographs, however, do not include enough context to justify that label. They are just photographs of irises. I took this one on Friday, April 19, 2024 at 3:23 PM. Read more »

Monday, November 27, 2023

Gerrymander Unbound

by Jerry Cayford

Avi Lev, CC BY-SA 4.0, via Wikimedia Commons

A friend of mine covers his Facebook tracks. He follows groups from across the political spectrum so that no one can pigeonhole him. He has friends and former colleagues who, he figures, will be among the armed groups going door to door purging enemies, if our society breaks into civil anarchy. He hides his tracks so no one will know he is the enemy.

That trick might work for the humans, but artificial intelligences (AI) will laugh at such puny human deceptions (if artificial intelligence can laugh). When AI knows every click you make, every page you visit, when you scroll fast or slow or pause, everything you buy, everything you read, everyone you call, and data and patterns on millions like you, well, it will certainly know whom you are likely to vote for, the probability that you will vote at all, and even the degree of certainty of its predictions.

All of that means that AI will soon be every gerrymanderer’s dream.

AI will know not just the party registrations in a precinct but how every individual in a proposed district will (probably) vote. This will allow a level of precision gerrymandering never seen before. There is only one glitch, one defect: with people living all jumbled up together, any map, no matter how complex and salamander-looking, will include some unwanted voters and miss some wanted ones. To get the most lopsided election result possible from a given group of voters—the maximally efficient, maximally unfair outcome—the gerrymanderer has to escape the inconvenience of people’s housing choices. And since relocating voters is not feasible, the solution is to free districts of the tyranny of voter location. The truly perfect gerrymander that AI is capable of producing would need to be a list, instead of a map: a list of exactly which voters the gerrymanderer wants in each district. But that isn’t possible. Is it? Read more »
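To make the “list, not a map” idea concrete, here is a minimal, purely hypothetical sketch in Python. It assumes the AI has already attached to each voter a probability of supporting the mapmaker’s party; the 55 percent safety margin, the greedy pack-and-crack strategy, and every name below are illustrative assumptions, not a description of any actual mapmaker’s method.

```python
# Hypothetical sketch: districts as lists of voters rather than regions.
# vote_probs[i] is an assumed, AI-predicted probability that voter i
# supports the mapmaker's party; margin is an arbitrary safety threshold.
def gerrymander_by_list(vote_probs, n_districts, margin=0.55):
    size = len(vote_probs) // n_districts
    need = int(margin * size) + 1  # supporters needed for one narrow, "safe" win
    # Rank voters from most to least favorable to the mapmaker.
    ranked = sorted(range(len(vote_probs)), key=lambda i: vote_probs[i], reverse=True)
    supporters = [i for i in ranked if vote_probs[i] >= 0.5]
    opponents = [i for i in ranked if vote_probs[i] < 0.5]

    districts = []
    # "Crack": win as many districts as possible by the slimmest safe margin,
    # padding each win with the least strongly opposed voters.
    while len(districts) < n_districts and len(supporters) >= need:
        wanted, supporters = supporters[:need], supporters[need:]
        padding, opponents = opponents[:size - need], opponents[size - need:]
        districts.append(wanted + padding)
    # "Pack": dump everyone left over into the remaining, conceded districts.
    leftovers = supporters + opponents
    for d in range(n_districts - len(districts)):
        districts.append(leftovers[d * size:(d + 1) * size])
    return districts

# e.g. gerrymander_by_list([0.9, 0.8, 0.6, 0.55, 0.45, 0.4, 0.2, 0.1, 0.05], 3)
```

Run on any toy list of probabilities, the result is the familiar pattern reformers call packing and cracking, only computed voter by voter instead of precinct by precinct.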

Monday, September 25, 2023

Grand Observations: Darwin and Fitzroy

by Mark Harvey

Captain Robert Fitzroy

One of the artifacts of modern American culture is the digital clutter that crowds our minds and crowds our days. I’m old enough to have grown up in the era before even answering machines and the glorification of fast information. It’s an era that’s hard to remember because, like most Americans, I’ve gotten lost in the sea of immediate “content” and the vast body of information at our fingertips and on our phones. While it’s a delicious feeling to be able to access almost every bit of knowledge acquired by humankind over the last few thousand years, I suspect the resulting mental clutter has in many ways made us just plain dumber. Our little brains can absorb and process a lot of information, but digesting the massive amount of data available nowadays has some of our minds resembling the storage units of hoarders: an unholy mess of useless facts and impressions guarded in a dark space with a lost key.

If you consider who our “wise men” and “wise women” are these days, they sure seem dumber than men and women of past centuries. I guess some of them are incredibly clever when it comes to computers, material science, genetic engineering, and the like. But when it comes to big-picture thinking, even the most glorified billionaires just seem foolish. And our batch of politicians even more so.

It’s hard to know the shape and content of the human mind in our millions of years of development, but the story goes that we’ve advanced in consciousness almost every century, with major advances in periods such as the Renaissance and the Enlightenment. That may be true for certain individuals, but as a whole, it seems we drove right on past the bus stop of higher consciousness with our digital orgy and embryonic embrace of artificial intelligence. Are we losing the wonderful feeling, agency, and utility of uncluttered minds? Read more »

Clever Cogs: Ants, AI, And The Slippery Idea Of Intelligence

by Jochen Szangolies

Figure 1: The Porphyrian Tree. Detail of a fresco at the Kloster Schussenried. Image credit: modified from Franz Georg Hermann, Public domain, via Wikimedia Commons.

The arbor porphyriana is a scholastic system of classification in which each individual or species is categorized by means of a sequence of differentiations, going from the most general to the specific. Based on the categories of Aristotle, it was introduced by the 3rd century CE logician Porphyry, and was a huge influence on the development of medieval scholastic logic. Using its system of differentiae, humans may be classified as ‘substance, corporeal, living, sentient, rational’. Here, the lattermost term is the most specific—the most characteristic of the species. Therefore, rationality—intelligence—is the mark of the human.

However, when we encounter ‘intelligence’ in the news these days, chances are that it is used not as a quintessentially human quality, but in the context of computation—reporting on the latest spectacle of artificial intelligence, with GPT-3 writing scholarly articles about itself or DALL·E 2 producing close-to-realistic images from verbal descriptions. While this sort of headline has become familiar, lately a new word has risen in prominence at the top of articles in the relevant publications: the otherwise innocuous modifier ‘general’. Gato, a model developed by DeepMind, is, we’re told, a ‘generalist’ agent, capable of performing more than 600 distinct tasks. Indeed, according to Nando de Freitas, team lead at DeepMind, ‘the game is over’, with merely the question of scale separating current models from truly general intelligence.

There are several interrelated issues emerging from this trend. A minor one is the devaluation of intelligence as the mark of the human: just as Diogenes’ plucked chicken deflates Plato’s ‘featherless biped’, tomorrow’s AI models might force us to rethink our self-image as ‘rational animals’. But then, arguably, Twitter already accomplishes that.

Slightly more worrying is a cognitive bias in which we take the lower branches of Porphyry’s tree to entail the higher ones. Read more »

Monday, June 20, 2022

Exorcising a New Machine

by David Kordahl

A.I.-generated image (from DALL-E Mini), given the text prompt, “computer with a halo, an angel, but digital”

Here’s a brief story about two friends of mine. Let’s call them A. Sociologist and A. Mathematician, pseudonyms that reflect both their professions and their roles in the story. A few years ago, A.S. and A.M. worked together on a research project. Naturally, A.S. developed the sociological theories for their project, and A.M. developed the mathematical models. Yet as the months passed, they found it difficult to agree on the basics. Each time A.M. showed A.S. his calculations, A.S. would immediately generate stories about them, spinning them as illustrations of social concepts he had just now developed. From A.S.’s point of view, of course, this was entirely justified, as the models existed to illustrate his sociological ideas. But from A.M.’s point of view, this pushed out far past science, into philosophy. Unable to agree on the meaning or purpose of their shared efforts, they eventually broke up.

This story was not newsworthy (it’d be more newsworthy if these emissaries of the “two cultures” had actually managed to get along), but I thought of it last week while I read another news story—that of the Google engineer who convinced himself a company chatbot was sentient.

Like the story of my two friends, this story was mostly about differing meanings and purposes. The subject of said meanings and purposes was a particular version of LaMDA (Language Models for Dialog Applications), which, to quote Google’s technical report, is a family of “language models specialized for dialog, which have up to 137 [billion] parameters and are pre-trained on 1.56 [trillion] words of public dialog data and web text.”

To put this another way, LaMDA models respond to text in a human-seeming way because they are created by feeding literal human conversations from online sources into a complex algorithm. The problem with such a training method is that humans online interact with various degrees of irony and/or contempt, which has required Google engineers to further train their models not to be assholes. Read more »

Monday, April 12, 2021

“Responsible” AI

by Fabio Tollon

What do we mean when we talk about “responsibility”? We say things like “he is a responsible parent”, “she is responsible for the safety of the passengers”, “they are responsible for the financial crisis”, and in each case the concept of “responsibility” seems to be tracking different meanings. In the first sense it seems to track virtue, in the second sense moral obligation, and in the third accountability. My goal in this article is not to go through each and every kind of responsibility, but rather to show that there are at least two important senses of the concept that we need to take seriously when it comes to Artificial Intelligence (AI). Importantly, it will be shown that there is an intimate link between these two types of responsibility, and it is essential that researchers and practitioners keep this in mind.

Recent work in moral philosophy has been concerned with issues of responsibility as they relate to the development, use, and impact of artificially intelligent systems. Oxford University Press recently published their first ever Handbook of Ethics of AI, which is devoted to tackling current ethical problems raised by AI and hopes to mitigate future harms by advancing appropriate mechanisms of governance for these systems. The book is wide-ranging (featuring over 40 unique chapters), insightful, and deeply disturbing. From gender bias in hiring to racial bias in creditworthiness and facial recognition software to bias in identifying a person’s sexual orientation, we are awash with cases of AI systematically enhancing rather than reducing structural inequality.

But how exactly should (can?) we go about operationalizing an ethics of AI in a way that ensures desirable social outcomes? And how can we hold those causally involved parties accountable, when the very nature of AI seems to make a mockery of the usual sense of control we deem appropriate in our ascriptions of moral responsibility? These are the two senses of responsibility I want to focus on here: how can we deploy AI responsibly, and how can we hold those responsible when things go wrong? Read more »

Monday, February 15, 2021

GPT-3 Understands Nothing

by Fabio Tollon

It is becoming increasingly common to talk about technological systems in agential terms. We routinely hear about facial recognition algorithms that can identify individuals, large language models (such as GPT-3) that can produce text, and self-driving cars that can, well, drive. Recently, Forbes magazine even awarded GPT-3 “person” of the year for 2020. In this piece I’d like to take some time to reflect on GPT-3. Specifically, I’d like to push back against the narrative that GPT-3 somehow ushers in a new age of artificial intelligence.

GPT-3 (Generative Pre-trained Transformer) is a third-generation, autoregressive language model. It makes use of deep learning to produce human-like text, such as sequences of words (or code, or other data), after being fed an initial “prompt” which it then aims to complete. The language model itself is trained on Microsoft’s Azure supercomputer, uses 175 billion parameters (its predecessor used a mere 1.5 billion), and makes use of unlabeled datasets (such as Wikipedia). This training isn’t cheap, with a price tag of $12 million. Once trained, the system can be used in a wide array of contexts: language translation, summarization, question answering, and so on.
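For readers who want a concrete feel for what “autoregressive” means here, the following is a deliberately tiny sketch in Python. It has nothing to do with GPT-3’s actual transformer architecture or its 175 billion parameters; it only illustrates the bare loop of predicting a likely next word from statistics of prior text and appending it, which is the sense in which such a model “completes” a prompt.

```python
import random
from collections import Counter, defaultdict

def train_bigram(words):
    """Toy stand-in for training: count which word tends to follow which."""
    counts = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def complete(prompt, counts, n_words=10):
    """Autoregressive loop: repeatedly sample a plausible next word and append it."""
    out = list(prompt)
    for _ in range(n_words):
        followers = counts.get(out[-1])
        if not followers:  # nothing ever followed this word in the training text
            break
        choices, weights = zip(*followers.items())
        out.append(random.choices(choices, weights=weights)[0])
    return " ".join(out)

corpus = "the cat sat on the mat and the dog sat on the rug".split()
print(complete(["the"], train_bigram(corpus)))  # e.g. "the dog sat on the mat and the cat sat on"
```

Scale that loop up by many orders of magnitude and swap the word counts for a learned neural network, and you arrive, roughly, at the family GPT-3 belongs to; nothing in the loop itself requires understanding, which is the point pressed below.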

Most of you will recall the fanfare that surrounded The Guardian’s publication of an article that was written by GPT-3. Many people were astounded at the text that was produced, and indeed, this speaks to the remarkable effectiveness of this particular computational system (or perhaps it speaks more to our willingness to project understanding where there might be none, but more on this later). How GPT-3 produced this particular text is relatively simple. Basically, it takes in a query and then attempts to offer relevant answers using the massive amounts of data at its disposal to do so. How different this is, in kind, from what Google’s search engine does is debatable. In the case of Google, you wouldn’t think that it “understands” your searches. With GPT-3, however, people seemed to get the impression that it really did understand the queries, and that its answers, therefore, were a result of this supposed understanding. This of course lends far more credence to its responses, as it is natural to think that someone who understands a given topic is better placed to answer questions about that topic. To believe this in the case of GPT-3 is not just bad science fiction, it’s pure fantasy. Let me elaborate. Read more »

Monday, October 5, 2020

Analogia: A Conversation with George Dyson

by Ashutosh Jogalekar

George Dyson is a historian of science and technology who has written books about topics ranging from the building of a native kayak (“Baidarka”) to the building of a spaceship powered by nuclear bombs (“Project Orion”). He is the author of the bestselling books “Turing’s Cathedral” and “Darwin Among the Machines” which explore the multifaceted ramifications of intelligence, both natural and artificial. George is also the son of the late physicist, mathematician and writer Freeman Dyson, a friend whose wisdom and thinking we both miss.

George’s latest book is called “Analogia: The Emergence of Technology Beyond Programmable Human Control”. It is in part a fascinating and wonderfully eclectic foray into the history of diverse technological innovations leading to the promises and perils of AI, from the communications network that allowed the United States army to gain control over the Apache Indians to the invention of the vacuum tube to the resurrection of analog computing. It is also a deep personal exploration of George’s own background in which he lived in a treehouse and gained mastery over the ancient art of Aleut baidarka building. I am very pleased to speak with George about these ruminations. I would highly recommend that readers listen to the entire conversation, but if you want to jump to snippets of specific topics, you can click on the timestamps below, after the video.

7:51 We talk about lost technological knowledge. George makes the point that it’s really the details that matter, and through the gradual extinction of practitioners and practice we stand in real danger of losing knowledge that can elevate humanity. Whether it’s the art of building native kayaks or building nuclear bombs for peaceful purposes, we need ways to preserve the details of knowledge of technology.

12:49 Digital versus analog computing. The distinction is fuzzy: As George says, “You can have digital computers made out of wood and you can have analog computers made out of silicon.” We talk about how digital computing became so popular in part because it was so cheap and made so much money. Ironically, we are now witnessing the growth of giant analog network systems built on a digital substrate.

21:22 We talk about Leo Szilard, the pioneering, far-sighted physicist who was the first to think of a nuclear chain reaction while crossing a London street in 1933. Szilard wrote a novel titled “The Voice of the Dolphins” which describes a group of dolphins trying to rescue humanity from its own ill-conceived inventions, an oddly appropriate metaphor for our own age. George talks about the formative influence of Trudy Szilard, Leo’s wife, who used to snatch him out of boring school lessons and take him to lunch, where she would have a pink martini and they would talk. Read more »

Monday, August 31, 2020

Are We Asking the Right Questions About Artificial Moral Agency?

by Fabio Tollon

Human beings are agents. I take it that this claim is uncontroversial. Agents are that class of entities capable of performing actions. A rock is not an agent; a dog might be. We are agents in the sense that we can perform actions, not out of necessity, but for reasons. These actions are to be distinguished from mere doings: animals, or perhaps even plants, may behave in this or that way by doing things, but strictly speaking, we do not say that they act.

It is often argued that action should be cashed out in intentional terms. Our beliefs, what we desire, and our ability to reason about these are all seemingly essential properties that we might cite when attempting to figure out what makes our kind of agency (and the actions that follow from it) distinct from the rest of the natural world. For a state to be intentional in this sense it should be about or directed towards something other than itself. For an agent to be a moral agent it must be able to do wrong, and perhaps be morally responsible for its actions (I will not elaborate on the exact relationship between being a moral agent and moral responsibility, but there is considerable nuance in how exactly these concepts relate to each other).

In the debate surrounding the potential of Artificial Moral Agency (AMA) this “Standard View” presented above is often a point of contention. The ubiquity of artificial systems in our lives can often lead us to believe that these systems are merely passive instruments. However, this is not necessarily the case. It is becoming increasingly clear that intuitively “passive” systems, such as recommender algorithms (or even email filter bots), are very receptive to inputs (often by design). Specifically, such systems respond to certain inputs (user search history, etc.) in order to produce an output (a recommendation, etc.). The question that emerges is whether such kinds of “outputs” might be conceived of as “actions”. Moreover, what if such outputs have moral consequences? Might these artificial systems be considered moral agents? This is not to necessarily claim that recommender systems such as YouTube’s are in fact (moral) agents, but rather to think through whether this might be possible (now or in the future). Read more »

Monday, September 9, 2019

Are we being manipulated by artificially intelligent software agents?

by Michael Klenk

Someone else gets more quality time with your spouse, your kids, and your friends than you do. Like most people, you probably enjoy just about an hour, while your new rivals are taking a whopping 2 hours and 15 minutes each day. But save your jealousy. Your rivals are tremendously charming, and you have probably fallen for them as well.

I am talking about intelligent software agents, a fancy name for something everyone is familiar with: the algorithms that curate your Facebook newsfeed, that recommend the next Netflix film to watch, and that complete your search query on Google or Bing.

Your relationships aren’t any of my business. But I want to warn you. I am concerned that you, together with the other approximately 3 billion social media users, are being manipulated by intelligent software agents online.

Here’s how. The intelligent software agents that you interact with online are ‘intelligent agents’ in the sense that they try to predict your behaviour taking into account what you did in your online past (e.g. what kind of movies you usually watch), and then they structure your options for online behaviour. For example, they offer you a selection of movies to watch next.
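As a purely illustrative sketch, and not any platform’s actual algorithm, that structuring step can be pictured like this in Python: score each candidate item by how much it resembles what you consumed before, and surface only the top few. The tag-overlap scoring rule and all the names below are assumptions made up for the example.

```python
def predict_engagement(history, item):
    """Crude behavioural proxy: items sharing tags with past viewing score higher."""
    seen_tags = {tag for past in history for tag in past["tags"]}
    return len(seen_tags & set(item["tags"])) / max(len(item["tags"]), 1)

def recommend(history, catalog, k=3):
    """Structure the user's options: surface the k items most likely to be watched next."""
    return sorted(catalog, key=lambda item: predict_engagement(history, item), reverse=True)[:k]

history = [{"title": "Heist thriller", "tags": ["crime", "thriller"]}]
catalog = [
    {"title": "Another heist film", "tags": ["crime", "thriller"]},
    {"title": "Nature documentary", "tags": ["nature"]},
    {"title": "Courtroom drama", "tags": ["crime", "drama"]},
]
print([item["title"] for item in recommend(history, catalog)])
# ['Another heist film', 'Courtroom drama', 'Nature documentary']
```

Everything in that score is computed from past behaviour alone; nothing in it represents your reasons for watching anything, which is exactly the gap described next.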

However, they do not care much for your reasons for action. How could they? They analyse and learn from your past behaviour, and mere behaviour does not reveal reasons. So, they likely do not understand what your reasons are and, consequently, cannot care about them.

Instead, they are concerned with maximising engagement, a specific type of behaviour. Intelligent software agents want you to keep interacting with them: to watch another movie, to read another news item, to check another status update. The increase in the time we spend online, especially on social media, suggests that they are getting quite good at this. Read more »

Monday, March 13, 2017

Artificial Stupidity

by Ali Minai

"My colleagues, they study artificial intelligence; me, I study natural stupidity." —Amos Tversky, (quoted in “The Undoing Project” by Michael Lewis).

Not only is this quote by Tversky amusing, it also offers profound insight into the nature of intelligence – real and artificial. Most of us working on artificial intelligence (AI) take it for granted that the goal is to build machines that can reason better, integrate more data, and make more rational decisions. What the work of Daniel Kahneman and Amos Tversky shows is that this is not how people (and other animals) function. If the goal in artificial intelligence is to replicate human capabilities, it may be impossible to build intelligent machines without "natural stupidity". Unfortunately, this is something that the burgeoning field of AI has almost completely lost sight of, with the result that AI is in danger of repeating the same mistakes in the matter of building intelligent machines as classical economists have made in their understanding of human behavior. If this does not change, homo artificialis may well end up being about as realistic as homo economicus.

The work of Tversky and Kahneman focused on showing systematically that much of intelligence is not rational. People don’t make all decisions and inferences by mathematically or logically correct calculation. Rather, they are made based on rules of thumb – or heuristics – driven not by analysis but by values grounded in instinct, intuition and emotion: Kludgy short-cuts that are often “wrong” or sub-optimal, but usually “good enough”. The question is why this should be the case, and whether it is a “bug” or a “feature”. As with everything else about living systems, Dobzhansky’s brilliant insight provides the answer: This too makes sense only in the light of evolution.

The field of AI began with the conceit that, ultimately, everything is computation, and that reproducing intelligence – even life itself – was only a matter of finding the “correct” algorithms. As six decades of relative failure have demonstrated, this hypothesis may be true in an abstract formal sense, but is insufficient to support a practical path to truly general AI. To paraphrase Feynman, Nature’s imagination has turned out to be much greater than that of professors and their graduate students. The antidote to this algorithm-centered view of AI comes from the notion of embodiment, which sees mental phenomena – including intelligence and behavior – as emerging from the physical structures and processes of the animal, much as rotation emerges from a pinwheel when it faces a breeze. From this viewpoint, the algorithms of intelligence are better seen, not as abstract procedures, but as concrete dynamical responses inherent in the way the structures of the organism – from the level of muscles and joints down to molecules – interact with the environment in which they are embedded.

Read more »

Monday, August 31, 2015

Fearing Artificial Intelligence

by Ali Minai

Artificial Intelligence is on everyone's mind. The message from a whole panel of luminaries – Stephen Hawking, Elon Musk, Bill Gates, Apple founder Steve Wozniak, Lord Martin Rees, Astronomer Royal of Britain and former President of the Royal Society, and many others – is clear: Be afraid! Be very afraid! To a public already immersed in the culture of Star Wars, Terminator, the Matrix and the Marvel universe, this message might sound less like an expression of possible scientific concern and more a warning of looming apocalypse. It plays into every stereotype of the mad scientist, the evil corporation, the surveillance state, drone armies, robot overlords and world-controlling computers à la Skynet. Who knows what “they” have been cooking up in their labs? Asimov's three laws of robotics are being discussed in the august pages of Nature, which has also recently published a multi-piece report on machine intelligence. In the same issue, four eminent experts discuss the ethics of AI. Some of this is clearly being driven by reports such as the latest one from Google's DeepMind, claiming that their DQN system has achieved “human-level intelligence”, or that a chatbot called Eugene had “passed the Turing Test”. Another legitimate source of anxiety is the imminent possibility of lethal autonomous weapon systems (LAWS) that will make life-and-death decisions without human intervention. This has led recently to the circulation of an open letter expressing concern about such weapons, and it has been signed by hundreds of other scientists, engineers and innovators, including Musk, Hawking and Gates. Why is this happening now? What are the factors driving this rather sudden outbreak of anxiety?

Looking at the critics' own pronouncements, there seem to be two distinct levels of concern. The first arises from rapid recent progress in the automation of intelligent tasks, including many involving life-or-death decisions. This issue can be divided further into two sub-problems: The socioeconomic concern that computers will take away all the jobs that humans do, including the ones that require intelligence; and the moral dilemma posed by intelligent machines making life-or-death decisions without human involvement or accountability. These are concerns that must be faced in the relatively near term – over the next decade or two.

The second level of concern that features prominently in the pronouncements of Hawking, Musk, Wozniak, Rees and others is the existential risk that truly intelligent machines will take over the world and destroy or enslave humanity. This threat, for all its dark fascination, is still a distant one, though perhaps not as distant as we might like.

In this article, I will consider these two cases separately.

Read more »

Cerebral Imperialism

The present is where the future comes to die, or more accurately, where an infinite array of possible futures all collapse into one. We live in a present where artificial intelligence hasn't been invented, despite a quarter century of optimistic predictions. John Horgan in Scientific American suggests we're a long way from developing it (although when it does come it may well be as a sudden leap into existence, a sudden achievement of critical mass). However and whenever (or if ever) it arrives, it's an idea worth discussing today. But, a question: Does this line of research suffer from “cerebral imperialism”?

___________________________________

The idea of “cerebral imperialism” came up in an interview I did for the current issue of Tricycle, a Buddhist magazine, with transhumanist professor and writer James “J” Hughes. One exchange went like this:

Eskow: There seems to be a kind of cognitive imperialism among some Transhumanists that says the intellect alone is “self.” Doesn’t saying “mind” is who we are exclude elements like body, emotion, culture, and our environment? Buddhism and neuroscience both suggest that identity is a process in which many elements co-arise to create the individual experience on a moment-by-moment basis. The Transhumanists seem to say, “I am separate, like a data capsule that can be uploaded or moved here and there.”

You’re right. A lot of our Transhumanist subculture comes out of computer science, male computer science, so a lot of them have that traditional “intelligence is everything” view. As soon as you start thinking about the ability to embed a couple of million trillion nanobots in your brain and back up your personality and memory onto a chip, or about advanced artificial intelligence deeply wedded with your own mind, or sharing your thoughts and dreams and feelings with other people, you begin to see the breakdown of the notion of discrete and continuous self.

An intriguing answer – one of many Hughes offers in the interview – but I was going somewhere else: toward the idea that cognition itself, that thing which we consider “mind,” is over-emphasized in our definition of self and therefore is projected onto our efforts to create something we call “artificial intelligence.”

Is the “society of mind” trying to colonize the societies of body and emotion?

Read more »