Artificial General What?

by Tim Sommers

One thing that Elon Musk and Bill Gates have in common, besides being two of the five richest people in the world, is that they both believe that there is a very serious risk that an AI more intelligent than them – and, so, more intelligent than you and I, obviously – will one day take over, or destroy, the world. This makes sense because in our society how smart you are is well known to be the best predictor of how successful and powerful you will become. But, you may have noticed, it’s not only the richest people in the world who worry about an AI apocalypse. One of the “Godfathers of AI,” Geoff Hinton, recently said “It’s not inconceivable” that AI will wipe out humanity. In a response linked to by 3 Quarks Daily, Gary Marcus, a neuroscientist and founder of a machine learning company, asked whether the advantages of AI were sufficient for us to accept a 1% chance of extinction. This question struck me as eerily familiar.

Do you remember who offered this advice? “Even if there’s a 1% chance of the unimaginable coming due, act as if it is a certainty.”

That would be Dick Cheney as quoted by Ron Suskind in “The One Percent Doctrine.” Many regard this as the line of thinking that led to the Iraq invasion. If anything, that’s an insufficiently cynical interpretation of the motives behind an invasion that killed between three hundred thousand and a million people and found no weapons of mass destruction. But there is a lesson there. Besides the fact that “inconceivable” need not mean 1% – but might mean a one following a googolplex of zeroes [.0….01%] – trying to react to every one-percent-probable threat may not be a good idea. Therapists have a word for this. It’s called “catastrophizing.” I know, I know, even if you are catastrophizing, we still might be on the brink of catastrophe. “The decline and fall of everything is our daily dread,” Saul Bellow said. So, let’s look at the basic story that AI doomsayers tell. Read more »



Hallucinating AI: The devil is in the (computational) details

by Robyn Repko Waller

Image by NoName_13 from Pixabay

AI has a proclivity for exaggeration. This hallucination is integral to its success and its danger.

Much digital ink has been spilled, and many computational resources consumed, of late over the all too rapidly advancing capacities of AI.

Large language models like GPT-4 heralded as a welcome shortcut for email, writing, and coding. Worried discussion of the implications for pedagogical assessment — how to codify and detect AI plagiarism. OpenAI image generation to rival celebrated artists and photographers. And what of the convincing deep fakes?

The convenience of using AI to innovate and to make our social world and our health more efficient, from TikTok to medical diagnosis and treatment. Continued calls, though, for algorithmic fairness in the use of algorithmic decision-making in finance, government, health, security, and hiring.

Newfound friends, therapists, lovers, and enemies of an artificial nature. Both triumphant and terrified exclamations and warnings of sentient, genuinely intelligent AI. Serious widespread calls for a pause in development of these AI systems. And, in reply, reports that such exclamations and calls are overblown: Doesn’t intelligence require experience? Embodiment?

These are fascinating and important matters. Still, I don’t intend to add to the much-warranted shouting. Instead, I want to draw attention to a curious, yet serious, corollary of the use of such AI systems, the emergence of artificial or machine hallucinations. By such hallucinations, folks mean the phenomenon by which AI systems, especially those driven by machine learning, generate factual inaccuracies or create new misleading or irrelevant content. I will focus on one kind of hallucination, the inherent propensity of AI to exaggerate and skew. Read more »

The Great Pretender: AI and the Dark Side of Anthropomorphism

by Brooks Riley

‘Wenn möglich, bitte wenden.’

That was the voice of the other woman in the car, ‘If possible, please turn around.’ She was nowhere to be seen in the BMW I was riding in sometime in the early aughts, but her voice was pleasant—neutral, polite, finely modulated and real.  She was the voice of the navigation system, a precursor of the chatbot—without the chat. You couldn’t talk back to her. All she knew about you was the destination you had typed into the system.

‘Wenn möglich, bitte wenden.’

She always said this when we missed a turn, or an exit. Since we hadn’t followed her suggestion the first time, she asked us again to turn around. There were reasons not to take her advice. If we were on the autobahn, turning around might be deadly. More often, we just wanted her to find a new route to our destination.

The silence after her second directive seemed excessive—long enough for us to get the impression that she, the ‘voice’, was sulking. In reality, the silence covered the period of time the navigation system needed to calculate a new route. But to ears that were attuned to silent treatments speaking volumes, it was as if Frau GPS was mightily miffed that we hadn’t turned around.

Recent encounters with the Bing chatbot have jogged my memory of that time of relative innocence, when a bot conveyed a message, nothing more. And yet, even that simple computational interaction generated a reflex anthropomorphic response, provoked by the use of language, or in the case of the pregnant silence, the prolonged absence of it. Read more »

Not Another Lamb

by Eric Bies

Everyone is talking about artificial intelligence. This is understandable: AI in its current capacity, which we so little understand ourselves, alternately threatens dystopia and promises utopia. We are mostly asking questions. Crucially, we are not asking so much whether the risks outweigh the rewards. That is because the relationship between the first potential and the second is laughably skewed. Most of us are already striving to thrive; whether increasingly superintelligent AI can help us do that is questionable. Whether AI can kill all humans is not.

So laymen like me potter about and conceive, no longer of Kant’s end-in-itself, but of some increasingly possible end-to-it-all. No surprise, then, when notions of the parallel and the alternate grow more and more conspicuous in our cultural moment. We long for other science-fictional universes.

Happily, then, did the news of the FDA’s approval of one company’s proprietary blend of “cultured chicken cell material” greet my wonky eyes last month.

“Too much time and money has been poured into the research and development of plant-based meat,” I thought. “It’s time we focused our attention on meat-based meat.”

When I shared this milestone with my students—most of them high school freshmen—opinions were split. Like AI, lab-grown meat was quick to take on the Janus-faced contour of promise and threat. The regulatory thumbs-up to GOOD Meat’s synthetic chicken breast was, for some students, evidence of our steady march, aided by science, into a sustainable future. For other students, it was yet another chromium-plated creepy-crawly, an omen of more bad to come. Read more »

Monday, April 10, 2023

Tribal Waters and The Supreme Court

by Mark Harvey

After we get back to our country, black clouds will rise and there will be plenty of rain. Corn will grow in abundance and everything [will] look happy. –Barboncito, Navajo Leader, 1868

Barboncito, Navajo Leader, circa 1868

My idea of a fun evening is listening to the oral arguments of a contentious dispute that has reached the Supreme Court. As much as I disagree with some of the justices, I must admit that almost all of them are wickedly sharp at analyzing the issues—the facts and the law—of every case that comes before them. I don’t always get how they arrive at their final votes on cases that seem cut and dried before their probing inquiry. But most of them can flay a poorly presented argument with all the efficiency of a seasoned hunter field-dressing a kill.

So it was with the recent hearing on Arizona v. The Navajo Nation, heard before the court this year on March 20. At stake, in this case, is what responsibility the US government does or doesn’t have in formally assessing the Navajo Nation’s need for water and then developing a plan to meet those needs. The brief on behalf of the Navajo people, Diné as they prefer to be called, puts the case in stark and unmistakable terms: “This case is about this promise of water to this tribe under these treaties, signed after these particular negotiations reflecting this tribe’s understanding. A promise is a promise.”

The promise referred to in the brief was made about 150 years ago, when the Diné signed a treaty with the US Government in 1868 to establish the Navajo Reservation as a “permanent home” where it sits today. The treaty is only seven pages long, and it promises the Diné a permanent home in exchange for giving up their nomadic life, staying within the reservation boundaries, and allowing whites to build railways and forts throughout the reservation as they see fit. A lot of things were left out—like water rights. Read more »

Monday Poem

Trying to make Sense of Red
—A Tennessee Cleanup

Its janitors are sweeping up its sins—
senators are on the floor with whisks and
fine-toothed combs. They crawl and sift,
scooping, collecting photographs
of those they’ve lynched
they cram them into
rubbish bins

before their kids get wind

they ban   their two-faced history,
they ban   before their children come to know,
they ban   before their jittering pot lid blows

but

maybe    they skew and hack the rules
         to save kids from what history shows,
maybe    they obfuscate for mercy’s sake,
maybe    before their children come to see and judge,
maybe    they’d rather have them shot in schools

Jim Culleny, 4/8/23

Are Mass Media and Democracy Compatible?

by Mindy Clegg

Herman and Chomsky’s classic work on American Mass Media!

In their oft-cited classic examination of the modern mass media, Manufacturing Consent, Edward Herman and Noam Chomsky described modern American news media thusly: “The mass media serve as a system for communicating messages and symbols to the general populace. It is their function to amuse, entertain, and inform, and to inculcate individuals with the values, beliefs, and codes of behavior that will integrate them into the institutional structures of the larger society. In a world of concentrated wealth and major conflicts of class interest, to fulfill this role requires systematic propaganda.”1 In other words, democratic states use privately-owned media as a means of social control. Private corporations own and operate media outlets, and they work with the US government because the power of the state dovetails with their own economic interests.

The groundwork for this state of affairs emerged out of intellectual discourse in the early days of mass media. In the wake of the first world war, prominent intellectuals like Walter Lippmann and Edward Bernays suggested a set of strategies for channeling the democratic impulses expanding in the United States to better align with the wishes of the ruling classes.2 Such analysis was and continues to be necessary, as many are unaware of the very real pitfalls of corporate media in democratic societies. These systems, now often globalized, shape our understanding of the past and present, and we must understand them if we hope to change them. But we must also wonder whether the singular focus on these systems of control leads to the feelings of hopelessness that many of us feel about our institutions these days. As much as describing what dominates us feels cathartic, focusing only on the systems of control and not on resistance makes the problem seem insurmountable. I argue that we need to look for the cracks as much as describe the problem posed by corporate media. Understanding the democratic alternatives within and outside of the mainstream production of popular culture can help us to see these cracks. Read more »

What IS a Natural Language, so that Language Models could learn it (and cognitive scientists stayed sane)?

by David J. Lobina

Language as a sound-meaning mapping.

The hype surrounding Large Language Models remains unbearable when it comes to the study of human cognition, no matter what I write in this Column about the issue – doesn’t everyone read my posts? I certainly do sometimes.

Indeed, this is my fourth successive post on the topic, having already made the points that Machine/Deep Learning approaches to Artificial Intelligence cannot be smart or sentient, that such approaches are not accounts of cognition anyway, and that when put to the test, LLMs don’t actually behave like human beings at all (where? In order: here, here, and here).[i]

But, again, no matter. Some of the overall coverage on LLMs can certainly be ludicrous (a covenant so that future, sentient computer programs have their rights protected?), and even delirious (let’s treat AI chatbots as we treat people, with radical love?), and this is without considering what some tech charlatans and politicians have said about these models. More to the point here, two recent articles from some cognitive scientists offer quite the bloated view regarding what LLMs can do and contribute to the study of language, and a discussion of where these scholars have gone wrong will, hopefully, make me sleep better at night.

One Pablo Contreras Kallens and two colleagues have it that LLMs constitute an existence proof (their choice of words) that the ability to produce grammatical language can be learned from exposure to data alone, without the need to postulate language-specific processes or even representations, with clear repercussions for cognitive science.[ii]

And one Steven Piantadosi, in a wide-ranging (and widely raging) book chapter, claims that LLMs refute Chomsky’s approach to language, and in toto no less, given that LLMs are bona fide (his choice of words) theories of language; these models have developed sui generis representations of key linguistic structures and dependencies, thereby capturing the basic dynamics of human language and constituting a clear victory for statistical learning in so doing (Contreras Kallens and co. get a tip of the hat here), and in any case Chomskyan accounts of language are not precise or formal enough, cannot be integrated with other fields of cognitive science, have not been empirically tested, and moreover…(oh, piantala).[iii] Read more »

A Cautionary Note: The Chinese Room Experiment, ChatGPT, and Paperclips

by John Allen Paulos

Despite many people’s apocalyptic response to ChatGPT, a great deal of caution and skepticism is in order. Some of it is philosophical, some of it practical and social. Let me begin with the former.

Whatever its usefulness, we naturally wonder whether ChatGPT and its near relatives understand language and, more generally, whether they demonstrate real intelligence. The Chinese Room thought experiment, a classic argument put forward by philosopher John Searle in 1980, somewhat controversially maintains that the answer is No. It is a refinement of arguments of this sort that go back to Leibniz.

In his presentation of the argument (very roughly sketched here), Searle first assumes that research in artificial intelligence has, contrary to fact, already managed to design a computer program that seems to understand Chinese. Specifically, the computer responds to inputs of Chinese characters by following the program’s humongous set of detailed instructions to generate outputs of other Chinese characters. It’s assumed that the program is so good at producing appropriate responses that even Chinese speakers find it to be indistinguishable from a human Chinese speaker. In other words, the computer program passes the so-called Turing test, but does it really understand Chinese?

The next step in Searle’s argument asks us to imagine a man completely innocent of the Chinese language in a closed room, perhaps sitting behind a large desk in it. He is supplied with an English language version of the same elaborate set of rules and protocols the computer program itself uses for correctly manipulating Chinese characters to answer questions put to it. Moreover, the man is directed to use these rules and protocols to respond to questions written in Chinese that are submitted to him through a mail slot in the wall of the room. Someone outside the room would likely be quite impressed with the man’s responses to the questions posed to him and what seems to be the man’s understanding of Chinese. Yet all the man is doing is blindly following the same rules that govern the way the sequences of symbols in the questions should be responded to in order to yield answers. Clearly the man could process the Chinese questions and produce answers to them without understanding any of the Chinese writing. Finally, Searle drops the mic by maintaining that both the man and the computer itself have no knowledge or understanding of Chinese. Read more »
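For readers who like to see the bare mechanics of Searle’s setup, here is a minimal, purely illustrative sketch of rule-following without understanding. The tiny lookup table and the Chinese question-and-answer pairs are invented for this example (the thought experiment posits an enormous rulebook covering any question, not three entries); the point is only that every step is blind symbol matching.

```python
# Purely illustrative sketch of Searle's rule-following man: a blind lookup
# from input symbols to output symbols. The question/answer pairs below are
# invented for this example; the real thought experiment assumes a rulebook
# vast enough to handle any question.

RULEBOOK = {
    "你叫什么名字？": "我没有名字。",        # "What is your name?" -> "I have no name."
    "你会说中文吗？": "当然会。",            # "Do you speak Chinese?" -> "Of course."
    "今天天气怎么样？": "今天天气很好。",    # "How is the weather?" -> "The weather is fine."
}

def man_in_the_room(question):
    """Match the incoming characters against the rulebook and return the
    prescribed characters. No step requires knowing what any symbol means."""
    return RULEBOOK.get(question, "对不起，我不明白。")  # "Sorry, I don't understand."

for q in ["你会说中文吗？", "今天天气怎么样？"]:
    print(q, "->", man_in_the_room(q))
```

From the outside, the answers look competent; on the inside, nothing but string matching has taken place—which is exactly the intuition Searle is pumping.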

Could Be Worse

by Mike Bendzela

[This will be a two-part essay.]

Brain MRI from Public Domain.

Ischemia

When the burly, bearded young man climbs into the bed with my husband, I scooch up in my plastic chair to get a better view. On a computer screen nearby, I swear I am seeing, in grainy black-and-white, a deep-sea creature, pulsing. There is a rhythmic barking sound, like an angry dog in the distance. With lights dimmed and curtains drawn in this mere alcove of a room, the effect is most unsettling. That barking sea creature would be Don’s cardiac muscle.

It is shocking to see him out of his work boots, dungarees, suspenders, and black vest, wearing instead a wraparound kitchen curtain for a garment. He remains logy and quiet while the young man holds a transducer against his chest and sounds the depths of his old heart, inspecting valves, ventricles, and vessels for signs of blood clots. This echocardiogram is part of the protocol, even though they are pretty sure the stroke has been caused by atherosclerosis in a cerebral artery.

The irony of someone like Don being held in such a room, amidst all this high-tech equipment, is staggering. He is a traditional cabinetmaker by trade and an enthusiast of 19th century technologies, especially plumbing systems and mechanical farm equipment. He embarked on a career as an Industrial Arts teacher in Maine in the 1970s but abandoned that gig during his student teaching days when he decided it was “mostly babysitting, not teaching.” The final break came when he discovered that one of his students could not even write his own name, and his superiors just said, “Move him along.”

In the dim quiet, while the technician probes Don’s chest, I mull over the conversation we just had with two male physicians. They had come into the room and introduced themselves as neurologists—Doctors Frick & Frack, for all I remember. Read more »

Climate Change Where I Live

by Mary Hrovat

Sunset, McCormick’s Creek SP, March 29, 2023

McCormick’s Creek State Park is one of my favorite hiking spots. The creek flows through a little canyon with a waterfall in a beautiful wooded area. I’ve been visiting the park for more than 40 years. It’s a constant in my life, whether the waterfall is roaring in flood or slowed to a trickle during a dry spell or, once in a great while, frozen solid. 

Late in the evening of Friday, March 31, an EF3 tornado struck the campground in the park. It caused considerable damage, and two people were killed. After the tornado left the park, it seriously damaged several homes in a rural area just outside Bloomington. It was part of an outbreak of 22 tornadoes throughout the state. A tornado that hit Sullivan, Indiana, destroyed or damaged homes and killed three people. 

It was tough to see spring beginning with such serious damage and loss of life in a beloved spot. It was also sobering to see photographs of the destruction in Sullivan. It seems that I’ve seen many such images from places to the south and southwest of us this winter, and in fact 2023 has been an unusually active year for tornadoes in the U.S. so far. There have already been more tornado fatalities in 2023 than in all of 2022 nationwide.  Read more »

Poem

Prophets on the Nairobi Expressway

by Rafiq Kathwari

“Please take the next flight to Nairobi,”
my niece said, her voice cracking over

WhatsApp. “Mom is in ICU. Lemme know
what time your flight lands. I’ll send the car.”

Early February morning on the Upper West Side,
I wore a parka, pashmina scarf, cap, gloves, rode

the A-Train to JFK, boarded Kenya Airways,
and 12 hours later

even before we landed at NBO, I peeled off my
layers anticipating equatorial warmth, the sun

at its peak, mid-afternoon. I waved at a tall, lean
man holding up RAFIKI scrawled on cardboard.

“Welcome,” he said.
“What’s your name?” I asked.

“Moses,” he said as we flew on the Expressway,
built by the Chinese.

“Oh,” I said. “My middle name is Mohammed.
Let’s look for Jesus and resurrect my sister.”

Sea Star Wasting Syndrome and Kelp Forest Collapse in the Northeast Pacific

by David Greer

Sunflower sea star (Pycnopodia helianthoides). Ink and watercolor, by Susan Taylor. Courtesy of the artist.

During the past decade, an environmental calamity has been gradually unfolding along the shores of North America’s Pacific coast. In what has been described as one of the largest recorded die-offs of a marine animal in history, the giant sunflower sea star (Pycnopodia helianthoides) has almost entirely disappeared from its range extending from Alaska’s Aleutian Islands to Baja California, its population of several billion having largely succumbed to a disease of undetermined cause, heightened and accelerated by a persistent marine heatwave of unprecedented intensity.

Equally tragic has been the collapse of kelp forests overwhelmed by the twin impact of elevated ocean temperatures close to shore and of the explosion of sea urchin populations, unchecked in their voracious grazing of kelp following the virtual extinction of their own primary predator, the sunflower sea star. One of the most productive ecological communities in the world, kelp forests act as nurseries for juvenile fish and other marine life in addition to sequestering carbon absorbed by the ocean. It took only a handful of years for most of the kelp to disappear, replaced by barren stretches of seabed densely carpeted by spiny sea urchins, themselves starving after reducing their main food supply to virtually nothing. When a keystone species abruptly vanishes from an ecosystem, the ripple effects can be far-reaching and catastrophic. Read more »

Monday, April 3, 2023

Thinking Through the Risks of AI

by Ali Minai

How intelligent is ChatGPT? That question has loomed large ever since OpenAI released the chatbot late in 2022. The simple answer to the question is, “No, it is not intelligent at all.” That is the answer that AI researchers, philosophers, linguists, and cognitive scientists have more or less reached a consensus on. Even ChatGPT itself will admit this if it is in the mood. However, it’s worth digging a little deeper into this issue – to look at the sense in which ChatGPT and other large language models (LLMs) are or are not intelligent, where they might lead, and what risks they might pose regardless of whether they are intelligent. In this article, I make two arguments. First, that, while LLMs like ChatGPT are not anywhere near achieving true intelligence, they represent significant progress towards it. And second, that, in spite of – or perhaps even because of – their lack of intelligence, LLMs pose very serious immediate and long-term risks. To understand these points, one must begin by considering what LLMs do, and how they do it.

Not Your Typical Autocomplete

As their name implies, LLMs focus on language. In particular, given a prompt – or context – an LLM tries to generate a sequence of sensible continuations. For example, given the context “It was the best of times; it was the”, the system might generate “worst” as the next word, and then, with the updated context “It was the best of times; it was the worst”, it might generate the next word, “of” and then “times”. However, it could, in principle, have generated some other plausible continuation, such as “It was the best of times; it was the beginning of spring in the valley” (though, in practice, it rarely does because it knows Dickens too well). This process of generating continuation words one by one and feeding them back to generate the next one is called autoregression, and today’s LLMs are autoregressive text generators (in fact, LLMs generate partial words called tokens which are then combined into words, but that need not concern us here.) To us – familiar with the nature and complexity of language – this seems to be an absurdly unnatural way to produce linguistic expression. After all, real human discourse is messy and complicated, with ambiguous references, nested clauses, varied syntax, double meanings, etc. No human would concede that they generate their utterances sequentially, one word at a time. Read more »
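To make the autoregressive loop concrete, here is a minimal sketch in Python. It is emphatically not how GPT-class models work internally — a real LLM scores continuations with a large neural network over tokens rather than a word-level bigram table, and the tiny “corpus” below is just the Dickens opening used in the example — but it shows the loop itself: generate one word, append it to the context, and use the extended context to generate the next.

```python
import random

# Toy "model": bigram counts harvested from the opening of A Tale of Two Cities.
CORPUS = ("it was the best of times it was the worst of times "
          "it was the age of wisdom it was the age of foolishness").split()

bigrams = {}  # maps a word to the list of words seen following it
for prev, nxt in zip(CORPUS, CORPUS[1:]):
    bigrams.setdefault(prev, []).append(nxt)

def generate(context, n_words=8):
    """Autoregressive loop: pick a continuation for the last word,
    append it to the context, and repeat with the extended context."""
    out = list(context)
    for _ in range(n_words):
        candidates = bigrams.get(out[-1])
        if not candidates:
            break  # no known continuation; a real model never runs dry
        out.append(random.choice(candidates))  # an LLM samples from learned probabilities
    return " ".join(out)

print(generate("it was the best of times it was the".split()))
```

Where this sketch draws the next word at random from a handful of observed bigrams, an LLM assigns a probability to every token in a vocabulary of tens of thousands, conditioned on the entire context; the feed-one-word-back-and-continue structure, however, is the same.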

Open Letter Season: Large Language Models and the Perils of AI

by Fabio Tollon and Ann-Katrien Oimann

DALL·E 2 generated image

Getting a handle on the impacts of Large Language Models (LLMs) such as GPT-4 is difficult.  These LLMs have raised a variety of ethical and regulatory concerns: problems of bias in the data set, privacy concerns for the data that is trawled in order to create and train the model in the first place, the resources used to train the models, etc. These are well-worn issues, and have been discussed at great length, both by critics of these models and by those who have been developing them.

What makes the task of figuring out the impacts of these systems even more difficult is the hype that surrounds them. It is often difficult to sort fact from fiction, and if we don’t have a good idea of what these systems can and can’t do, then it becomes almost impossible to figure out how to use them responsibly. Importantly, in order to craft proper legislation at both national and international levels we need to be clear about the future harm these systems might cause and ground these harms in the actual potential that these systems have.

In the last few days this discourse has taken an interesting turn. The Future of Life Institute (FLI) published an open letter (which has been signed by thousands of people, including eminent AI researchers) calling for a 6-month moratorium on “Giant AI Experiments”. Specifically, the letter calls for “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4”. Quite the suggestion, given the rapid progress of these systems.

A few days after the FLI letter, another Open Letter was published, this time by researchers in Belgium (Nathalie A. Smuha, Mieke De Ketelaere, Mark Coeckelbergh, Pierre Dewitte and Yves Poullet). In the Belgian letter, the authors call for greater attention to the risk of emotional manipulation that chatbots, such as GPT-4, present (here they reference the tragic chatbot-incited suicide of a Belgian man). In the letter the authors outline some specific harms these systems bring about and advocate for more educational initiatives (including awareness campaigns to better inform people of the risks), a broader public debate, and urgent, stronger legislative action. Read more »