Anil Ananthaswamy in Quanta:
Language isn’t always necessary. While it certainly helps in getting across certain ideas, some neuroscientists have argued that many forms of human thought and reasoning don’t require the medium of words and grammar. Sometimes, the argument goes, having to turn ideas into language actually slows down the thought process.
Now there’s intriguing evidence that certain artificial intelligence systems could also benefit from “thinking” independently of language.
When large language models (LLMs) process information, they do so in mathematical spaces, far from the world of words. That’s because LLMs are built using deep neural networks, which essentially transform one sequence of numbers into another — they’re effectively complicated math functions. Researchers call the numerical universe in which these calculations take place a latent space.
But these models must often leave the latent space for the much more constrained one of individual words.
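The embedding-and-decoding round trip described above can be sketched in a few lines. This is a toy illustration, not the article's method: the tiny vocabulary, the random embedding matrix, and the single linear transform are all invented stand-ins for a real model's components.

```python
# Toy sketch of moving between words and a numeric latent space.
# Everything here (vocabulary, matrices) is invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
vocab = ["cat", "dog", "sat", "ran"]

# Embedding matrix: each word maps to a point in a 3-D latent space.
E = rng.normal(size=(len(vocab), 3))

def embed(word: str) -> np.ndarray:
    """Map a word to its latent-space vector."""
    return E[vocab.index(word)]

# Inside the model, computation is just math on these vectors;
# a single linear layer stands in for the network's transformations.
W = rng.normal(size=(3, 3))
latent = W @ embed("cat")

# To emit text, the model must leave latent space: score the latent
# vector against every vocabulary embedding and pick the best match.
scores = E @ latent
next_word = vocab[int(np.argmax(scores))]
```

The constraint the article points to is visible in the last two lines: a continuous 3-D vector gets collapsed to one of only four discrete words, discarding everything else the vector encodes.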
More here.