Steve Nadis in Quanta:
“Neural networks are currently the most powerful tools in artificial intelligence,” said Sebastian Wetzel, a researcher at the Perimeter Institute for Theoretical Physics. “When we scale them up to larger data sets, nothing can compete.”
And yet, all this time, neural networks have had a disadvantage. The basic building block of many of today’s successful networks is the multilayer perceptron, or MLP. But despite a string of successes, humans can’t understand how networks built on these MLPs arrive at their conclusions, or whether there is some underlying principle that explains those results. Like a magician’s tricks, the feats that neural networks perform are hidden inside what’s commonly called a black box.
AI researchers have long wondered if it’s possible for a different kind of network to deliver similarly reliable results in a more transparent way.
An April 2024 study introduced an alternative neural network design, called a Kolmogorov-Arnold network (KAN), that is more transparent yet can also do almost everything a regular neural network can, at least for a certain class of problems.
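The structural contrast is easiest to see in code. Below is a minimal, illustrative numpy sketch, not the paper's implementation: the function names are mine, and the Gaussian-bump basis stands in for the B-splines the KAN authors actually use. An MLP layer learns weights but applies a fixed activation at each node; a KAN-style layer instead learns a univariate function on every edge, and each of those functions can be plotted and inspected on its own, which is where the claimed transparency comes from.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_layer(x, W, b):
    """Standard MLP layer: learnable linear map, fixed activation on the nodes."""
    return np.tanh(W @ x + b)

def kan_layer(x, coeffs, centers, width=0.5):
    """KAN-style layer: a learnable univariate function phi_ij on every edge.

    Each phi_ij is modeled here as a sum of fixed Gaussian bumps with
    learnable coefficients (a simple stand-in for the B-splines used in
    the KAN paper). coeffs has shape (out_dim, in_dim, n_basis).
    """
    # Evaluate every basis bump at every input: shape (1, in_dim, n_basis).
    basis = np.exp(-(((x[None, :, None] - centers[None, None, :]) / width) ** 2))
    # Each edge's function value phi_ij(x_i): shape (out_dim, in_dim).
    phi = (coeffs * basis).sum(axis=-1)
    # Output node j simply sums its incoming edges: y_j = sum_i phi_ij(x_i).
    return phi.sum(axis=1)

in_dim, out_dim, n_basis = 3, 2, 8
x = rng.normal(size=in_dim)

# MLP: one weight matrix and bias; the tanh nonlinearity is fixed.
W, b = rng.normal(size=(out_dim, in_dim)), rng.normal(size=out_dim)
print("MLP layer:      ", mlp_layer(x, W, b))

# KAN: one learned function per edge (out_dim * in_dim of them), each of
# which can be inspected individually after training.
coeffs = 0.1 * rng.normal(size=(out_dim, in_dim, n_basis))
centers = np.linspace(-2.0, 2.0, n_basis)
print("KAN-style layer:", kan_layer(x, coeffs, centers))
```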
More here.