Is Education Worthless?

by Fabio Tollon

“How do you get a philosophy major away from your front door? You pay them for the pizza.”

As a doctoral candidate in philosophy, I am often asked what I am going to “do” with my degree. That is, how will I get a job and be a good, productive little bourgeois worker? How will I contribute to society, and how will my degree (the years of which were, of course, spent thinking about the meaning of “meaning”, whether reality is real, and how rigid designation works) benefit anybody? I have heard many variations on the theme of the apparent uselessness of philosophy. Now, I think philosophy has a great many uses, both in itself and pragmatically. Of concern here, however, is whether not just philosophy, but education in general, might be (mostly) useless.

If you are like me, then you think education matters. Education is important, should be funded and encouraged, and it generally improves the well-being of individuals, communities, and countries. It is with this preconception that I went head-first into Bryan Caplan’s well-written (and often wonderfully irreverent) The Case Against Education, where he argues that we waste trillions in taxpayer revenue by throwing it at our mostly inefficient education system. Caplan does not take issue with education as such, but rather with the very specific form that education has taken in the 21st century. Who hasn’t sat bored in a class and wondered whether circle geometry would have any bearing on one’s employability?

As the title suggests, this is not a book that is kind in its assessment of the current state of education. While standard theory in labour economics holds that education has large positive effects on human capital, Caplan claims that its effect is meagre. In contrast to “human capital purists”, Caplan argues that the function of education is to signal three traits: intelligence, conscientiousness, and conformity. Education does not develop students’ skills to any great degree; rather, it magnifies their ability to signal those traits effectively to potential employers.



Monday, March 13, 2017

Artificial Stupidity

by Ali Minai

"My colleagues, they study artificial intelligence; me, I study natural stupidity." —Amos Tversky, (quoted in “The Undoing Project” by Michael Lewis).

Not only is this quote by Tversky amusing, it also offers profound insight into the nature of intelligence – real and artificial. Most of us working on artificial intelligence (AI) take it for granted that the goal is to build machines that can reason better, integrate more data, and make more rational decisions. What the work of Daniel Kahneman and Amos Tversky shows is that this is not how people (and other animals) function. If the goal in artificial intelligence is to replicate human capabilities, it may be impossible to build intelligent machines without “natural stupidity”. Unfortunately, this is something that the burgeoning field of AI has almost completely lost sight of, with the result that AI is in danger of repeating, in its attempt to build intelligent machines, the same mistakes that classical economists made in their understanding of human behavior. If this does not change, homo artificialis may well end up being about as realistic as homo economicus.

The work of Tversky and Kahneman focused on showing systematically that much of intelligence is not rational. People do not make all decisions and inferences by mathematically or logically correct calculation. Rather, most are made using rules of thumb – or heuristics – driven not by analysis but by values grounded in instinct, intuition, and emotion: Kludgy short-cuts that are often “wrong” or sub-optimal, but usually “good enough”. The question is why this should be the case, and whether it is a “bug” or a “feature”. As with everything else about living systems, Dobzhansky’s brilliant insight provides the answer: This too makes sense only in the light of evolution.

The field of AI began with the conceit that, ultimately, everything is computation, and that reproducing intelligence – even life itself – was only a matter of finding the “correct” algorithms. As six decades of relative failure have demonstrated, this hypothesis may be true in an abstract formal sense, but it is insufficient to support a practical path to truly general AI. To elaborate on Feynman, Nature’s imagination has turned out to be much greater than that of professors and their graduate students. The antidote to this algorithm-centered view of AI comes from the notion of embodiment, which sees mental phenomena – including intelligence and behavior – as emerging from the physical structures and processes of the animal, much as rotation emerges from a pinwheel when it faces a breeze. From this viewpoint, the algorithms of intelligence are better seen not as abstract procedures, but as concrete dynamical responses inherent in the way the structures of the organism – from the level of muscles and joints down to molecules – interact with the environment in which they are embedded.
