Artificial neural networks are computational models inspired by the organization of neurons in the brain. They are used to model and analyze data, to implement algorithms, and to try to understand the computational principles used by the brain. The popularity of neural networks in computer science, machine learning and cognitive science has varied wildly, both across time and between people. To an enthusiast, neural networks are a revolutionary way of conceiving of computation; the entry point to robust, distributed, easily parallelizable processing; the means to build artificial intelligence systems that replicate the complexity of the brain; and a way to understand the computations that the brain carries out. To skeptics they are poorly understood and over-hyped, offering little insight into general computational principles in either computer science or cognition. Neural networks are often called “the second best solution to any problem”. Depending on where you stand, this either means that they are often promising but never actually useful, or that they are applicable to a range of problems and do almost as well as solutions explicitly tailored to the particular details of a problem (and applicable only to that problem).
Neural networks typically consist of a number of simple information-processing units (the “neurons”). Each neuron combines a number of inputs (some or all of which come from other neurons) to give an output, which is then typically used as input to other neurons in the network. The connections between neurons normally have weights, which determine how strongly one neuron affects another. So, for example, a simple neuron could sum up all its inputs weighted by the connection strengths and give an output of 0 or 1 depending on whether this sum is below or above some threshold. This output then functions as an input to other neurons, with appropriate weights for each connection.
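The simple threshold neuron just described can be sketched in a few lines of code. The function name and the specific inputs, weights, and threshold below are illustrative choices, not taken from the text:

```python
def threshold_neuron(inputs, weights, threshold):
    """Return 1 if the weighted sum of inputs exceeds the threshold, else 0."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total > threshold else 0

# With two equally weighted inputs and a threshold of 0.5,
# this unit behaves like a logical OR gate.
print(threshold_neuron([0, 1], [1.0, 1.0], 0.5))  # -> 1
print(threshold_neuron([0, 0], [1.0, 1.0], 0.5))  # -> 0
```

In a network, the return value of one such unit would be fed, with its own connection weight, into the input list of other units.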
A computation involves transforming some stream of input into some stream of output. For example, the input stream might be a list of numbers that come into the network one by one, and the desired output stream might be the squares of those numbers. Some or all of the neurons receive the input through connections just like those between neurons. The output stream is taken to be the output of some particular set of neurons in the network. The network can be programmed to do a particular transformation (“trained”) by adjusting the strengths of connections between different neurons and between the inputs and the neurons. Typically this is done before the network is used to process the desired input, but sometimes the connection weights are changed according to some pre-determined rule as the network processes input.