How Large Language Models Prove Chomsky Wrong

Steven Piantadosi at Slator:

Steven: It’s certainly true that for many or most of these models, their training consists of learning to predict the next token in language, right? So they see some string and then they’re asked what the next word is going to be in that string, and when you ask them to answer a question, what they’re doing is predicting the language that would follow that question. Say you ask it to explain the fundamental theorem of arithmetic in the style of Donald Trump. They’re taking that text and then predicting, word by word, what the next likely word would be, and that happens to be a description of the theorem in the style of Donald Trump.

So I think it’s true that they’re working like that. I think where the interesting debate is, is what exactly that means, right? How I think about it is that if you were doing a really good job of predicting upcoming linguistic material, what word was going to be said next, you’d actually have to have discovered quite a bit about the world and about language, right, the grammar. So if you think about these models as having lots of parameters and configuring themselves in a way that lets them predict language well, probably what they’re doing is actually configuring themselves to represent some facts about the world and some facts about the dynamics of language.

So, for example, if you gave it a prompt that said something like, “you walk into a fancy Italian restaurant, what happens next?” Well, it will just predict the next word, and it’ll probably give you a plausible description of that scene, of what the next events are going to be.
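
To make the “predict the next word” loop concrete, here is a minimal sketch of greedy next-token generation. It assumes PyTorch and the Hugging Face transformers library with the small public GPT-2 checkpoint standing in for the far larger chat models Piantadosi is describing (which also sample from the distribution rather than always taking the single most likely token):

```python
# A toy illustration of next-token prediction, assuming GPT-2 via Hugging Face
# transformers (pip install torch transformers). Real chat models are much
# larger and typically sample instead of decoding greedily.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "You walk into a fancy Italian restaurant. What happens next?"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    for _ in range(30):
        logits = model(input_ids).logits   # shape: (1, seq_len, vocab_size)
        next_id = logits[0, -1].argmax()   # greedy: take the most likely next token
        input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=-1)

print(tokenizer.decode(input_ids[0]))
```

The loop is the whole trick: all of the model’s parameters go into making those next-token scores good, which is Piantadosi’s point that predicting language well pushes the model to represent facts about language and the world.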

More here.