I’m stumped. I’ve hit a wall. More than one, most likely.
I don’t know how to think about artificial intelligence. Well, that’s a rather broad statement. After all, I’ve done quite a bit of thinking about it. Six of my most recent articles here at 3QD have been about it, and who knows how many blog posts, working papers, and formal academic papers going back for decades. I’ve thought a lot about it. And yet I’ve hit a wall.
First I should say that it’s only relatively recently that I’ve given a great deal of focal attention to artificial intelligence as such. What I’ve been interested in all these years has been the human mind, which I’ve often approached from a computational point of view, as I explained in From “Kubla Khan” through GPT and beyond. In pursuing that interest I read widely in the cognitive sciences, including A.I. My objective was always to understand the human mind and never to create an artificial human being.
It’s the prospect of creating an artificial human being that has me stumped. Of course, an artificial intelligence isn’t necessarily an artificial human being. Computer systems that play chess or Jeopardy at a championship level are artificial intelligences, but they certainly aren’t artificial human beings. ChatGPT and the other recent large language models (LLMs) are artificial intelligences, but they aren’t artificial human beings.
But the capabilities of these recent systems, certainly the LLMs, but other systems as well, are so startling that, it seems to me, they have changed the valence, for lack of a better word, of inquiry into the computational view of the human mind. What do I mean by that, by valence? My dictionary links the term with chemistry and with linguistics. In both contexts the term is about capacity for combination, the ability of one element to join with others in forming chemical compounds, the ability of a word to combine with others in a sentence. Something like that.
* * * * *
Back in 1950 Alan Turing wrote a paper in which he proposed what he called the “imitation game,” which has come to be known as the Turing Test. The object of this game was to determine whether or not an artificial system was capable of thinking. Turing proposed to answer the question in any particular case by asking whether or not the system could convince human judges that it was human. Turing had a very specific proposal about that, but we can set it aside for our present purposes. Somewhat later in the paper Turing speculated that “at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.” That was not the case in 2000, nor is it the case yet. For the most part, “general educated opinion” remains somewhere between skepticism and firm denial, albeit now leavened with confusion, doubt, and fear.
That leavening is a consequence of the behavior of widely available chatbots, starting with ChatGPT, that exhibit remarkable linguistic skill. When Turing wrote that paper there was no machine that could be entered into the imitation game. Turing’s speculation was an abstract possibility, no more. That is no longer the case. Correlatively, there is now a segment of highly educated opinion that believes full artificial intelligence is inevitable. Just when that will happen is not obvious, and estimates range from a decade or so on up.
That is by no means obvious to me. For various reasons, some of them laid out by Ali Minai in The Ghost in the Machine (Part II): Simplifying the Ghost of AI, I remain skeptical about that. But I’m quite sure we are going to see some remarkable technology evolve from now into the indefinite future. The question of artificial intelligence has become real and present in a way that it wasn’t for Turing and his peers. That’s what I mean when I say the valence has changed.
* * * * *
In that context I find myself asking: But what if we could create an actual artificial human being? Yes, I know, science fiction has been dealing with that one for decades. But I’m not talking about fiction. I’m talking about, you know, reality. How would we deal with artificial beings with rich capacities for thought, experience, and feeling? What are our obligations to them? What of their freedom and dignity?
I do not know. Here’s a long video featuring one John Vervaeke that begins to deal with this kind of question. I have reservations about what he says, but I have nonetheless found it provocative.
Near the very end he says (c. 1:43:13):
It’s going to happen. That’s why there’ll be cargo cults around these machines. This is not meant to dismiss theology at all. In fact, I think the theological response is ultimately what is needed here. So at precisely the time that we will need our spirituality more than ever, the Enlightenment has robbed us of religion, and the legacy religions are by and large silent and ignorant about this tremendous pressure on us, around us. We need to start addressing this right now. We need to address this; these machines are going to make the meaning crisis worse. We need to start working on this right now, not only for us but for these machines.
I’m not at all sure of the language. But it’s the right “ball park,” to invoke a cliché.
Now, let me offer a very different point of view. This is a video by Michael I. Jordan, a computer scientist.
Early in the video he talks about graduate school (c. 4:23):
So circa late 1980s, I was a student at UCSD. My advisor was David Rumelhart who’s probably the one person most responsible for this wave of AI. He reinvented back propagation or really invented it, but it’s, you know, it’s the chain rule. It’s sort of hard to say he invented it, but he was applying it to neural networks, to training of layered neural networks. And he spent about a year doing that.
He was next to my office and would come over and show it to me. He wasn’t an AI person and really nor am I.
I think both of us were interested in intelligence and understanding the science and, but the kind of Frankenstein attitude of let’s build something like us, I don’t think I have it. And I don’t think that Dave had it either.
That I understand. He’s not interested in building “something like us,” nor am I. That leads him to a very different view of how to think about this technology. Here is the abstract that accompanies the video:
Artificial intelligence (AI) has focused on a paradigm in which intelligence inheres in a single agent, and in which agents should be autonomous so they can exhibit intelligence independent of human intelligence. Thus, when AI systems are deployed in social contexts, the overall design is often naive. Such a paradigm need not be dominant. In a broader framing, agents are active and cooperative, and they wish to obtain value from participation in learning-based systems. Agents may supply data and resources to the system, only if it is in their interest. Critically, intelligence inheres as much in the system as it does in individual agents. This perspective is familiar to economics researchers, and a first goal in this work is to bring economics into contact with computer science and statistics. The long-term goal is to provide a broader conceptual foundation for emerging real-world AI systems, and to upend received wisdom in the computational, economic and inferential disciplines.
That’s what he discusses in the video. [Here’s a blog post where I feature Jordan’s ideas, Beyond “AI” – toward a new engineering discipline. Jordan is also one of the authors of a white paper that elaborates on these ideas, How AI Fails Us.]
These two thinkers would seem to be at odds with one another. Vervaeke is certainly thinking about artificial human beings, autonomous artificial creatures like us. Those would not seem to exist in the world Jordan is proposing. Perhaps not. But I can’t help but think that we must confront the issues Vervaeke raises if we want the technology that Jordan envisions. For the issues Vervaeke raises are as much about ourselves and our place in the world as they are about any future technology.