Gary Marcus in his Substack newsletter:
Every now and then engineers make an advance, and scientists and lay people begin to ponder whether that advance might yield important insight into the human mind. Descartes wondered whether the mind might work on hydraulic principles; throughout the second half of the 20th century, many wondered whether the digital computer would offer a natural metaphor for the mind.
The latest hypothesis to attract notice, both within the scientific community and in the world at large, is the notion that a popular technology known as large language models, such as OpenAI’s GPT-3, might offer important insight into the mechanics of the human mind. Enthusiasm for such models has grown rapidly; OpenAI’s Chief Scientist Ilya Sutskever recently suggested that such systems could conceivably be “slightly conscious”. Others have begun to compare GPT with the human mind.
That GPT-3, an instance of a kind of technology known as a “neural network” and powered by a technique known as deep learning, appears clever is beyond question. But aside from such systems’ possible merits as engineering tools, one can ask another question: are large language models a good model of human language?
More here.