by David J. Lobina
Where was I? Last month I made the point that Artificial Intelligence (AI) – or, more appropriately, Machine Learning and Deep Learning, the actual paradigms driving the current hype in AI – is doomed to be forever inanimate (i.e., to lack sentience) and dumb (i.e., not smart in the sense that humans can be said to be “smart”; maybe “Elon Musk” smart, though).[i] And I did so by highlighting two features of Machine Learning that are relevant to any discussion of these issues: that the processes involved in building the relevant mathematical models are underlain by the wrong kind of physical substrate for sentience; and that these processes basically calculate correlations between inputs and outputs – the patterns an algorithm finds within the dataset it is fed – and such correlation-finding is not the right sort of processing mechanism for (human) sapience.
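By way of a toy illustration of that second point (a minimal sketch; the numbers are made up and the model is as simple as they come), this is all that “learning” amounts to in such a system: adjusting parameters until the model’s outputs track whatever correlation is present in the input-output pairs it is fed.

```python
# Minimal sketch (illustrative only): "learning" as fitting input-output
# correlations. A tiny linear model is adjusted by gradient descent so that
# its outputs track a pattern in some hypothetical data; nothing beyond the
# statistical association between xs and ys is ever computed.

xs = [0.0, 1.0, 2.0, 3.0, 4.0]   # inputs (invented for illustration)
ys = [1.1, 2.9, 5.2, 6.8, 9.1]   # outputs following roughly y = 2x + 1

w, b = 0.0, 0.0                  # model parameters, initially zero
lr = 0.01                        # learning rate

for step in range(5000):
    # predictions and errors under the current parameters
    preds = [w * x + b for x in xs]
    errors = [p - y for p, y in zip(preds, ys)]
    # gradients (up to a constant factor) of the mean squared error
    grad_w = sum(e * x for e, x in zip(errors, xs)) / len(xs)
    grad_b = sum(errors) / len(xs)
    # nudge the parameters to reduce the error, i.e. to fit the correlation
    w -= lr * grad_w
    b -= lr * grad_b

print(f"fitted: y = {w:.2f}x + {b:.2f}")  # recovers the pattern in the data, no more
```

A Deep Learning model has vastly more parameters than the two above, but the operation is the same in kind: error-driven curve-fitting over a dataset.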
These were technical points, in a way, and as such their import need not be very extensive. In fact, last time around I also claimed that the whole question of whether AI can be sentient or sapient was probably moot to begin with. After all, when we talk about AI [sic] these days, what we are really talking about is, on the one hand, some (mathematical) models of the statistical distributions of various kinds of data (tokens, words, images, what have you), and on the other, and much more commonly, the computer programs that we actually use to interact with these models – for instance, conversational agents such as ChatGPT, which accesses a Large Language Model (LLM) in order to respond to a given user’s prompts. From the point of view of cognition, however, neither the representations (or symbols) nor the processes involved in any of the constructs AI practitioners usually mention – models, programs, algorithms – bear much resemblance to what we know about (human) cognition – or, indeed, about the brain, despite claims that the neural networks of Machine/Deep Learning mimic brain processes.
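To make the model/program distinction concrete, here is another minimal sketch (the corpus and every name in it are hypothetical, and a real LLM is incomparably more elaborate): the “model” is nothing more than a table of statistics gathered from data, and the “conversational agent” is a separate, quite ordinary program that consults that table to produce a reply to a prompt.

```python
import random

# Minimal sketch (all data hypothetical): the "model" is just a table of
# next-token statistics extracted from a corpus; the "agent" is a separate
# program that queries the model to generate a reply to a user's prompt.

corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Build a toy bigram model: for each token, the tokens observed to follow it.
model = {}  # token -> list of observed next tokens
for prev, nxt in zip(corpus, corpus[1:]):
    model.setdefault(prev, []).append(nxt)

def respond(prompt, length=6):
    """The 'conversational agent': a program that samples from the model."""
    token = prompt.split()[-1]  # condition on the prompt's last word
    reply = []
    for _ in range(length):
        # sample a continuation from the model's statistics
        token = random.choice(model.get(token, corpus))
        reply.append(token)
    return " ".join(reply)

print(respond("where is the"))  # e.g. "cat sat on the mat and"
```

The point is only the division of labour: the statistics live in the model, while the thing the user actually talks to is a program that samples from them.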