Fighting Poverty Effectively

A few years ago, I posted about the Poverty Action Lab and its approach to development: randomized trials to see what works and what doesn’t. The new issue of the Boston Review is out, and in its New Democracy Forum, Abhijit Vinayak Banerjee, the Lab’s director, has a piece on sensible development aid policies.

Figuring out what works is not easy: a large body of literature documents the pitfalls of the intuitive approach to program evaluation. When we do something and things get better, it is tempting to credit what we did, but we have no way of knowing what would have happened in the absence of the intervention. For example, a study of schools in western Kenya by Paul Glewwe, Michael Kremer, Sylvie Moulin, and Eric Zitzewitz compared the performance of children in schools that used flip charts for teaching science with that of children in schools that did not, and found that the former did significantly better in the sciences even after controlling for all other measurable factors. An intuitive assessment might readily ascribe the difference to the educational advantages of flip charts, but the researchers wondered why some schools had flip charts when the large majority did not. Perhaps the parents of children attending these schools were particularly motivated, and that motivation led independently both to the purchase of the flip charts and, more importantly, to goading their children to do their homework. Perhaps these schools would have done better even if there were no such things as flip charts.
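To see how an unobserved factor can manufacture an apparent effect, here is a minimal simulation (all numbers are invented for illustration): parental motivation raises both the chance a school has flip charts and the children’s test scores, while the charts themselves do nothing. A naive comparison of means still puts the flip-chart schools ahead.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000  # hypothetical schools

# Unobserved confounder: parental motivation.
motivation = rng.normal(size=n)

# Motivated parents are more likely to buy flip charts...
flip_chart = (motivation + rng.normal(size=n)) > 1.0

# ...and independently push their children to do their homework.
# The true effect of the flip charts here is exactly zero.
scores = 50 + 5 * motivation + rng.normal(scale=10, size=n)

naive_effect = scores[flip_chart].mean() - scores[~flip_chart].mean()
print(f"Naive flip-chart 'effect': {naive_effect:.1f} points")  # positive despite a true effect of 0
```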

Glewwe and company therefore undertook a randomized experiment: 178 schools in the same area were sorted alphabetically, first by geographic district, then by geographic division, and then by school name. Then every other school on that list was assigned to be a flip-chart school. This was essentially a lottery, which ensured that, on average, there were no systematic differences between the two sets of schools. If we were to see a difference between the sets of schools, we could be confident that it was the effect of the flip charts. Unfortunately, the researchers found no difference between the schools that won the flip-chart lottery and the ones that lost.
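The assignment rule is simple enough to sketch in a few lines of code. The school records below are made up; only the sorting and alternation mirror the procedure described above:

```python
# Hypothetical records; the actual study used 178 schools.
schools = [
    {"district": "District B", "division": "Division 2", "name": "School Q"},
    {"district": "District A", "division": "Division 1", "name": "School Z"},
    {"district": "District A", "division": "Division 1", "name": "School M"},
    # ...
]

# Sort by district, then division, then school name.
schools.sort(key=lambda s: (s["district"], s["division"], s["name"]))

# Every other school on the sorted list becomes a flip-chart school.
flip_chart_schools = schools[0::2]
comparison_schools = schools[1::2]
```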

Randomized trials like these—that is, trials in which the intervention is assigned randomly—are the simplest and best way of assessing the impact of a program.
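Once assignment is random, the analysis can be as simple as a difference in mean outcomes with a standard error. A minimal sketch, with hypothetical scores:

```python
import numpy as np

def difference_in_means(treated, control):
    """Estimated program effect: difference in mean outcomes,
    with a standard error allowing unequal variances."""
    treated = np.asarray(treated, dtype=float)
    control = np.asarray(control, dtype=float)
    effect = treated.mean() - control.mean()
    se = np.sqrt(treated.var(ddof=1) / len(treated) +
                 control.var(ddof=1) / len(control))
    return effect, se

# Hypothetical test scores, for illustration only.
treated_scores = [62, 58, 71, 64, 55, 68]
control_scores = [60, 57, 69, 66, 54, 70]
effect, se = difference_in_means(treated_scores, control_scores)
print(f"estimated effect: {effect:.1f} (SE {se:.1f})")
# An estimate within about two standard errors of zero, as with
# the flip charts, is indistinguishable from no effect.
```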

Also see the comments by Ian Goldin, F. Halsey Rogers, and Nicholas Stern; Mick Moore; Ian Vásquez; Angus Deaton; Alice H. Amsden; Robert H. Bates; Carlos Barbery; Howard White; Jagdish Bhagwati; Raymond C. Offenheiser and Didier Jacobs; and Ruth Levine.