By Namit Arora
(A slightly modified version of this article appeared in Philosophy Now, Nov 2011. Here is the PDF.)
As a graduate student of computer engineering in the early 90s, I recall impassioned late-night debates on whether machines can ever be intelligent—intelligent, as in mimicking the cognition, common sense, and problem-solving skills of ordinary humans. Scientists and bearded philosophers spoke of ‘humanoid robots.’ Neural network research was hot, and one of my professors was a star in the field. A breakthrough seemed inevitable and imminent. Still, I felt certain that Artificial Intelligence (AI) was a doomed enterprise.
I argued out of intuition, from a sense of the immersive nature of our life: how much we subconsciously acquire and call upon to get through life; how we arrive at meaning and significance not in isolation but through embodied living; and how contextual, fluid, and intertwined this was with our moods, desires, experiences, selective memory, physical body, and so on. How can we program all this into a machine and have it pass the unrestricted Turing test? How could a machine that did not care about its existence as humans do ever behave as humans do? Can a machine become socially and emotionally intelligent like us without viscerally knowing infatuation, joy, loss, suffering, the fear of death and disease? In hindsight, it seems fitting that I was then also drawn to Dostoevsky, Camus, and Kierkegaard.
My interlocutors countered that, while extremely complex, the human brain is clearly an instance of matter, amenable to the laws of physics. They posited a reductionist and computational approach to the brain that many, including Steven Pinker and Daniel Dennett, continue to champion today. Our intelligence, and everything else that informed our being in the world, had to be somehow ‘coded’ in our brain’s circuitry, including the great many symbols, rules, and associations we relied on to get through a typical day. Was there any reason why we couldn’t ‘decode’ this and reproduce intelligence in a machine someday? Couldn’t a future supercomputer mimic our entire neural circuitry and be as smart as us? Recently, Dennett declared in his sonorous voice, “We are robots made of robots made of robots made of robots.”
Today’s supercomputers are ten million times faster than those of the early 90s. But despite the big advances in computing, AI has fallen woefully short of its ambition and hype. Instead, we have “expert” systems that process predetermined inputs in specific domains, perform pattern matching and database lookups, and algorithmically learn to adapt their outputs. Examples include chess software, search engines, speech recognition, industrial and service robots, and traffic and weather forecasting systems. Machines have done well with a great many tasks that we ourselves can, or already do, pursue algorithmically—including many whose algorithms are still unknown to us—as in searching for the word “ersatz” in an essay, making cappuccino, restacking books in a library, navigating our car in a city, or landing a plane. But so much else that defines our intelligence remains well beyond machines—such as projecting our creativity and imagination to understand new contexts and their significance, or figuring out how and why new sensory stimuli are relevant or not. Why is AI in such a brain-dead state? Is there any hope for it? Let’s take a closer look.