Namit Arora in Philosophy Now:
René Descartes held that science and math would one day explain everything in nature. Early AI researchers embraced Hobbes’ view that reasoning was calculating, Leibniz’s idea that all knowledge could be expressed as a set of primitives [basic ideas], and Kant’s belief that all concepts were rules. At the heart of Western rationalist metaphysics – which shares a remarkable continuity with ancient Greek and Christian metaphysics – lay Cartesian mind-body dualism. This became the dominant inspiration for early AI research. Early researchers pursued what is now known as ‘symbolic AI’. They assumed that our brains stored discrete thoughts, ideas, and memories at discrete points, and that information was ‘found’ rather than ‘evoked’ by humans. In other words, the brain was a repository of symbols and rules which mapped the external world onto neural circuits. And so the problem of creating AI was thought to boil down to creating a gigantic knowledge base with efficient indexing, i.e., a search engine extraordinaire. That is, the researchers thought a machine could be made as smart as a human by storing context-free facts, along with rules that would effectively reduce search time. Marvin Minsky of MIT’s AI lab went as far as claiming that our common sense could be produced in machines by encoding ten million facts about objects and their functions.
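The picture sketched above – intelligence as a store of context-free facts plus rules, queried by search – can be made concrete with a toy forward-chaining program. This is only an illustrative sketch of the general idea, not any historical system; the facts, rules, and names are invented for the example.

```python
# Toy sketch of the symbolic-AI picture: a knowledge base of context-free
# facts and rules, with "thinking" reduced to search (forward chaining).
# All facts, rules, and names here are invented for illustration.

facts = {("bird", "tweety"), ("penguin", "opus")}

# Each rule pairs a fact pattern (predicate, anything) with a conclusion.
rules = [
    (("bird", None), lambda x: ("can_fly", x)),   # birds can fly...
    (("penguin", None), lambda x: ("bird", x)),   # penguins are birds
]

def forward_chain(facts, rules):
    """Repeatedly apply rules until no new facts can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for (pred, _), conclude in rules:
            for f in list(derived):
                if f[0] == pred:
                    new = conclude(f[1])
                    if new not in derived:
                        derived.add(new)
                        changed = True
    return derived

kb = forward_chain(facts, rules)
print(("can_fly", "opus") in kb)  # True -- and already a hint of trouble:
# context-free rules happily conclude that a penguin can fly.
```

Even this tiny example shows why context-free facts are brittle: the system cannot tell which rules are relevant or when an exception applies.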
It is one thing to feed millions of facts and rules into a computer, another to get it to recognize their significance and relevance. This difficulty, known as the ‘frame problem’, eventually became insurmountable for the ‘symbolic AI’ research paradigm. One critic, Hubert L. Dreyfus, expressed the problem thus: “If the computer is running a representation of the current state of the world and something in the world changes, how does the program determine which of its represented facts can be assumed to have stayed the same, and which might have to be updated?” (‘Why Heideggerian AI Failed and how Fixing it would Require making it more Heideggerian’).
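Dreyfus’s question can be put in miniature with a toy world model: after an action changes one represented fact, a literal-minded program has no principled way to know which of its other facts still hold, short of writing a ‘frame axiom’ for every fact/action pair. The world, actions, and names below are hypothetical, invented purely to illustrate the shape of the problem.

```python
# Toy illustration of the frame problem. A world model holds represented
# facts; an action explicitly changes one of them. Which of the rest can
# be assumed to have stayed the same? Everything here is invented.

world = {
    "door_open": False,
    "light_on": True,
    "cat_location": "sofa",
    "room_temperature": 21,
}

def apply_action(world, action):
    """Update only the facts the action explicitly mentions."""
    updated = dict(world)
    updated.update(action["effects"])
    return updated

open_door = {"name": "open_door", "effects": {"door_open": True}}
new_world = apply_action(world, open_door)

# The naive program assumes every unmentioned fact is unchanged -- but
# opening the door may let the cat out or change the temperature.
# Enumerating which facts are *relevant* to which actions is exactly
# what the frame problem asks, and it resists exhaustive listing.
unchanged = [k for k in world if new_world[k] == world[k]]
print(unchanged)  # ['light_on', 'cat_location', 'room_temperature']
```

The code “solves” nothing: it simply assumes unmentioned facts persist, which is precisely the assumption Dreyfus argues cannot be made in the open-ended real world.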
GOFAI – Good Old Fashioned Artificial Intelligence – as symbolic AI came to be called, soon turned into what philosophers of science call a degenerative research program – reduced to reacting to new discoveries rather than making them. It is unsettling to think how many prominent scientists and philosophers held (and continue to hold) such naïve assumptions about how human minds operate. A few tried to understand what went wrong and looked for a new paradigm for AI.