Stephen Ornes in Quanta:
By now, people treat neural networks as a kind of AI panacea, capable of solving tech challenges that can be restated as problems of pattern recognition. They provide natural-sounding language translation. Photo apps use them to recognize and categorize recurring faces in your collection. And programs driven by neural nets have defeated the world's best players at games including Go and chess.
However, neural networks have always lagged in one conspicuous area: solving difficult symbolic math problems. These include the hallmarks of calculus courses, like integrals and ordinary differential equations. The hurdles arise from the nature of mathematics itself, which demands exact, symbolic answers. Neural nets, by contrast, are probabilistic pattern matchers: they learn which Spanish translation sounds best, or what your face looks like, and can generate new patterns of the same kind.
The situation changed late last year when Guillaume Lample and François Charton, a pair of computer scientists working in Facebook's AI research group in Paris, unveiled the first successful approach to solving symbolic math problems with neural networks.
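Their key move is representational. A symbolic expression is a tree, and a tree can be flattened into a sequence of tokens in prefix notation, which turns "integrate this function" into exactly the kind of sequence-to-sequence translation problem that neural networks already handle well. Below is a minimal sketch in Python using SymPy; the tokenizer is illustrative, not the paper's exact vocabulary, and the real system trains a transformer on millions of such (problem, solution) pairs.

```python
# Minimal sketch: flatten a SymPy expression tree into prefix-notation
# tokens, the kind of sequence representation Lample and Charton feed
# to a sequence-to-sequence model. (Illustrative tokenizer, not the
# paper's exact vocabulary.)
import sympy as sp

def expr_to_prefix(expr):
    """Walk the expression tree head-first, emitting one token per node."""
    if not expr.args:                  # leaf: a symbol or a number
        return [str(expr)]
    tokens = [expr.func.__name__]      # operator head: Add, Mul, Pow, sin, ...
    for arg in expr.args:
        tokens.extend(expr_to_prefix(arg))
    return tokens

x = sp.Symbol("x")
f = x * sp.cos(x)                      # the "source sentence": an integrand
F = sp.integrate(f, x)                 # the "target sentence": x*sin(x) + cos(x)

print(expr_to_prefix(f))               # e.g. ['Mul', 'x', 'cos', 'x']
print(expr_to_prefix(F))               # a longer token sequence for the answer
```

Each (f, F) pair becomes one training example, and because the mapping runs both ways, the authors could also generate data cheaply in reverse: differentiate random expressions, then ask the network to undo it.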
More here.