Sean Carroll in Preposterous Universe:
Over the last year, AI large language models (LLMs) like ChatGPT have demonstrated a remarkable ability to carry on human-like conversations in a variety of different contexts. But the way these LLMs “learn” is very different from how human beings learn, and the same can be said for how they “reason.” It’s reasonable to ask: do these AI programs really understand the world they are talking about? Do they possess a common-sense picture of reality, or can they just string together words in convincing ways without any underlying understanding? Computer scientist Yejin Choi is a leader in trying to understand the sense in which AIs are actually intelligent, and why in some ways they’re still shockingly stupid.
More here.