Stephen M. Walt in Foreign Policy:
Why does so much of the academic writing on international affairs seem to be of little practical value, mired in a “cult of irrelevance”? Is it because IR scholars are pursuing a misleading model of “science,” patterned after physics, chemistry, or biology? Or is it because many prominent academics fear criticism and are deathly afraid of being controversial, and prefer to hide behind arcane vocabulary, abstruse mathematics, or incomprehensible postmodern jargon?
Both motivations are probably at work to some degree, but I would argue that academics are for the most part just responding to the prevailing incentive structures and metrics that are used to evaluate scholarly merit. This point is made abundantly clear in an important new article by Peter Campbell and Michael Desch of the University of Notre Dame, titled “Rank Irrelevance: How Academia Lost Its Way.” Campbell and Desch examine the methodology behind the National Research Council rankings of graduate programs in political science, and argue that the methods used are both “systematically biased” and analytically flawed.
National Research Council (NRC) rankings carry a fair bit of weight in academia. As I know from my own experience, deans, provosts, and presidents pay attention to where departments are ranked. A department chair who presides over a significant improvement in his or her department's ranking will be viewed favorably, while a decline sets off warning bells. Similarly, if a junior faculty member is up for tenure and gets an “outside offer” from a more highly ranked department, that will be taken as a strong signal of that faculty member's perceived value. By contrast, if you're up for tenure and get an offer from a department ranked further down the food chain, it will be a positive sign but not necessarily dispositive. For these and other reasons, the rankings matter.
The problem, as Campbell and Desch show, is that the rankings are seriously flawed.
More here.