Psychology and neuroscience crack open AI large language models

Matthew Hutson in Nature:

The latest wave of AI relies heavily on machine learning, in which software identifies patterns in data on its own, without being given any predetermined rules as to how to organize or classify the information. These patterns can be inscrutable to humans. The most advanced machine-learning systems use neural networks: software inspired by the architecture of the brain. They simulate layers of neurons, which transform information as it passes from layer to layer. As in human brains, these networks strengthen and weaken neural connections as they learn, but it’s hard to see why certain connections are affected. As a result, researchers often talk about AIs as ‘black boxes’, the inner workings of which are a mystery.
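The layered transformation described above can be sketched in a few lines. Everything here is an illustrative assumption (the layer sizes, the ReLU activation, random untrained weights), not the architecture of any particular system:

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, W, b):
    # One layer: a linear map (the "connections") followed by a nonlinearity.
    return np.maximum(0.0, W @ x + b)

# Three layers of simulated neurons; the weight matrices W are the
# connections that training strengthens or weakens.
sizes = [4, 8, 8, 2]
weights = [rng.normal(size=(m, n)) for n, m in zip(sizes, sizes[1:])]
biases = [np.zeros(m) for m in sizes[1:]]

x = rng.normal(size=4)
for W, b in zip(weights, biases):
    x = layer(x, W, b)  # information is transformed as it passes layer to layer

print(x.shape)
```

After training, each individual weight is just a number; the difficulty the article describes is explaining *why* those millions of numbers ended up where they did.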

In the face of this difficulty, researchers have turned to the field of explainable AI (XAI), expanding its inventory of tricks and tools to help reverse-engineer AI systems.
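One of the simplest tricks in the XAI inventory is perturbation-based attribution: treat the model as a black box, occlude each input feature in turn, and measure how much the output changes. A minimal sketch, where the toy "model" and the zero baseline are assumptions for illustration only:

```python
import numpy as np

def model(x):
    # Stand-in for a black box: we can call it, but not inspect it.
    w = np.array([2.0, -1.0, 0.0, 5.0])
    return float(w @ x)

def occlusion_importance(f, x, baseline=0.0):
    # Importance of feature i = how much the output moves when
    # feature i is replaced with a neutral baseline value.
    y = f(x)
    scores = []
    for i in range(len(x)):
        x_occ = x.copy()
        x_occ[i] = baseline
        scores.append(abs(y - f(x_occ)))
    return np.array(scores)

x = np.ones(4)
print(occlusion_importance(model, x))  # → [2. 1. 0. 5.]
```

Real XAI methods (saliency maps, probing, and the like) are far more sophisticated, but most share this basic logic: intervene on the system and observe what the intervention reveals.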

More here.