by David J. Lobina
Artificial General Intelligence, however exactly the concept is to be defined, is upon us, say two prominent AI experts. Not exactly an original statement, as this sort of claim has come up multiple times in the last year or so, often followed by various qualifications and the inevitable dismissals (Gary Marcus has already pointed out that this last iteration involves not a little goalpost-shifting, and that it doesn’t really stand up to scrutiny anyway).
I’m very sceptical too, for the simple reason that modern Machine/Deep Learning models are huge correlation machines, and correlation is not the sort of process that underlies whatever we might want to call an intelligent system. It is certainly not the way we know humans “think”, and the point carries yet more force when it comes to Language Models, those guess-next-token-based-on-statistical-distribution-of-huge-amounts-of-data systems.[1]
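To make that guess-the-next-token idea concrete, here is a minimal sketch using nothing more than a toy bigram counter; the tiny corpus, the tokens, and the sampling choices are my own illustrative assumptions, not a description of any particular model:

```python
# Toy illustration of next-token prediction as pure correlation:
# count which token tends to follow which, then sample accordingly.
# Corpus and parameters are invented purely for illustration.
import random
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat ate the fish".split()

# Tally, for each token, how often every other token follows it.
bigram_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigram_counts[prev][nxt] += 1

def next_token(prev):
    """Guess the next token in proportion to how often it followed
    `prev` in the corpus: statistics of co-occurrence, nothing more."""
    counts = bigram_counts.get(prev)
    if not counts:
        return None  # no observed continuation for this token
    tokens, weights = zip(*counts.items())
    return random.choices(tokens, weights=weights)[0]

# Generate a short continuation, one guessed token at a time.
token, generated = "the", ["the"]
for _ in range(6):
    token = next_token(token)
    if token is None:
        break
    generated.append(token)
print(" ".join(generated))
```

A Language Model does the same kind of thing at vastly greater scale, with a neural network estimating the conditional distribution instead of a bigram table; the question at issue here is whether scaling that procedure up ever amounts to thinking.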
This is not to say that a clear definition of intelligence is in place, but we are on firmer ground when discussing what sorts of abilities and mental representations are involved when a person has a thought or engages in some thinking. I would argue, in fact, that the account some philosophers and cognitive scientists have put together over the last 40 or so years on this very question ought to be regarded as the yardstick against which any artificial system needs to be evaluated if we are to make sense of all these claims regarding the sapience of computers calculating huge numbers of correlations. That’s what I’ll do in this post, and in the following one I shall show how most AI models out there happen to be pretty hopeless in this regard (there is a preview in the photo above).