Why Minds Are Not Like Computers

Ari Schulman in The New Atlantis:

Since the inception of the AI project, the use of computer analogies to try to describe, understand, and replicate mental processes has led to their widespread abuse. Typically, an exponent of AI will not just use a computer metaphor to describe the mind, but will also assert that such a description is a sufficient understanding of the mind—indeed, that mental processes can be understood entirely in computational terms. One of the most pervasive abuses has been the purely functional description of mental processes. In the black box view of programming, the internal processes that give rise to a behavior are irrelevant; only a full knowledge of the input-output behavior is necessary to completely understand a module. Because humans have “input” in the form of the senses, and “output” in the form of speech and actions, it has become an AI creed that a convincing mimicry of human input-output behavior amounts to actually achieving true human qualities in computers.
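To make the "black box" view concrete, here is a minimal sketch in Python (the function names and the choice of example are illustrative assumptions, not from the essay): two modules whose internal processes are entirely different, yet which are indistinguishable to an observer who knows only their input-output behavior.

```python
# A minimal sketch of the "black box" view of programming: two modules
# with completely different internals but identical input-output behavior.
# Names are illustrative, not drawn from the essay.

def sum_first_n_iterative(n: int) -> int:
    """Compute 1 + 2 + ... + n by stepping through every value."""
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def sum_first_n_closed_form(n: int) -> int:
    """Compute the same result in a single step, via Gauss's formula."""
    return n * (n + 1) // 2

# To a purely behavioral observer, these are the same module: every input
# maps to the same output, so the internal process drops out of view.
assert all(sum_first_n_iterative(n) == sum_first_n_closed_form(n)
           for n in range(1000))
```

On the black-box view, this equivalence is the whole story; the essay's complaint is that AI exponents extend the same reasoning to minds, where the internal process may be precisely what matters.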

The embrace of input-output mimicry as a standard traces back to Alan Turing’s famous “imitation game,” in which a computer program engages in a text-based conversation with a human interrogator, attempting to fool the person into believing that it, too, is human. The game, now popularly known as the Turing Test, is above all a statement of epistemological limitation—an admission of the impossibility of knowing with certainty that any other being is thinking, and an acknowledgement that conversation is one of the most important ways to assess a person’s intelligence. Thus Turing said that a computer that passes the test would be regarded as thinking, not that it actually is thinking, or that passing the test constitutes thinking. In fact, Turing specified at the outset that he devised the test because the “question ‘Can machines think?’ I believe to be too meaningless to deserve discussion.” But it is precisely this claim—that passing the Turing Test constitutes thinking—that has become not just a primary standard of success for artificial intelligence research, but a philosophical precept of the project itself.
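Turing's game is, at bottom, a simple protocol, and a schematic sketch may help fix its structure. This is only a sketch under stated assumptions: it reduces the exchange to string-valued question and answer functions, and the parameter names are mine, not Turing's; his paper specifies only a text channel between an interrogator and two hidden respondents.

```python
# A schematic sketch of Turing's imitation game as a protocol.
# All names here are illustrative assumptions for this sketch.
import random

def imitation_game(machine_reply, human_reply, ask, judge, rounds=5):
    """Play one game; return True if the interrogator fails to spot the machine.

    machine_reply, human_reply: str -> str (the two hidden respondents)
    ask: label -> question string posed by the interrogator
    judge: transcript -> "A" or "B", the interrogator's guess at the machine
    """
    respondents = {"A": machine_reply, "B": human_reply}
    if random.random() < 0.5:  # conceal which label is the machine
        respondents = {"A": human_reply, "B": machine_reply}
    transcript = []
    for _ in range(rounds):
        for label, reply in respondents.items():
            question = ask(label)
            transcript.append((label, question, reply(question)))
    guess = judge(transcript)
    machine_label = next(l for l, r in respondents.items()
                         if r is machine_reply)
    return guess != machine_label

# Example usage with trivial stand-ins (hypothetical, for illustration):
# fooled = imitation_game(lambda q: "Hmm, let me think about that.",
#                         lambda q: input(q + " > "),
#                         ask=lambda label: f"Question for {label}?",
#                         judge=lambda transcript: "A")
```

Note that nothing in the protocol inspects, or could inspect, what the machine_reply box does internally; the verdict rests on the transcript alone, which is exactly the epistemological limitation Turing was acknowledging.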

This precept is based on a crucial misunderstanding of why computers work the way they do.