Uri Bram in Nautilus:
The 1980s at the MIT Artificial Intelligence Laboratory seemed to outsiders like a golden age, but inside, David Chapman could already see that winter was coming. As a member of the lab, Chapman was the first researcher to apply the mathematics of computational complexity theory to robot planning and to show mathematically that there could be no feasible, general method of enabling AIs to plan for all contingencies. He concluded that while human-level AI might be possible in principle, none of the available approaches had much hope of achieving it.
In 1990, Chapman wrote a widely circulated research proposal suggesting that researchers take a fresh approach and attempt a different kind of challenge: teaching a robot how to dance. Dancing, wrote Chapman, was an important model because “there’s no goal to be achieved. You can’t win or lose. It’s not a problem to be solved…. Dancing is paradigmatically a process of interaction.” Dancing robots would require a sharp change in practical priorities for AI researchers, whose techniques were built around tasks, like chess, with a rigid structure and unambiguous end goals. The difficulty of creating dancing robots would also require an even deeper change in our assumptions about what characterizes intelligence.
Chapman now writes about the practical implications of philosophy and cognitive science. In a recent conversation with Nautilus, he spoke about the importance of imitation and apprenticeship, the limits of formal rationality, and why robots aren’t making your breakfast.
More here.