The failure of political science

Last week the American Senate voted, with a clear bipartisan majority, to stop National Science Foundation funding for political science research. Of all disciplines, only political science was singled out for the cuts, and the money will go to cancer research instead.

The decision is obviously wrong for so many reasons, but my point is different. How could political scientists, who are supposed to understand better than anyone else how politics works, allow this to happen? What does it tell us about the state of the discipline that the academic experts in political analysis cannot prevent overt political action that hurts them directly and rather severely?

To me, this failure of American political scientists to protect their own turf in the political game is scandalous. It is as bad as Nobel-winning economists Robert Merton and Myron Scholes leading the hedge fund Long-Term Capital Management to bust and losing 4.6 billion dollars with the help of their Nobel-winning economic theories. Just as Merton and Scholes' hedge fund story reveals the true real-world value of (much) financial economics theory, so does the humiliation of political science by Congress reveal the true real-world value of (much) political theory.

Think about it – the world-leading academic specialists on collective action, interest representation and mobilization could not get themselves mobilized, organized and represented in Washington to protect their funding. The professors of the political process and legislative institutions could not find a way to work these same institutions to their own advantage. The experts on political preferences and incentives did not see the broad bipartisan coalition against political science forming. That's embarrassing.

It is even more embarrassing because American political science is the most productive, innovative, and competitive in the world. There is no doubt that almost all of the best new ideas, methods, and theories in political science over the last 50 years have come from the US (and a lot of these innovations were made possible by funding from the National Science Foundation). So it is not that individual American political scientists are not smart – of course they are – but for some reason, as a collective body, they have not been able to benefit from their own knowledge and insights. Or perhaps that knowledge and those insights about US politics are deficient in important ways. The fact remains: political scientists were beaten at what should have been their own game. Hopefully some kind of lesson will emerge from all that…

P.S. No reason for public administration, sociology and other related disciplines to be smug about pol sci’s humiliation – they have been saved (for now) mostly by their own irrelevance. 

In defense of description

John Gerring has a new article in the British Journal of Political Science [ungated here] which attempts to restore description to its rightful place as a respectable occupation for political scientists. Description has indeed been relegated to the sidelines in favor of causal inference over the last 50 years, and Gerring does a great job of explaining why this is wrong. But he also points out why description is inherently more difficult than causal analysis:

‘Descriptive inference, by contrast, is centred on a judgment about what is important, substantively speaking, and how to describe it. To describe something is to assert its ultimate value. Not surprisingly, judgments about matters of substantive rationality are usually more contested than judgments about matters of instrumental rationality, and this offers an important clue to the predicament of descriptive inference.’ (p.740)

Required reading.

Writing with the rear-view mirror

Social science research is supposed to work like this:
1) You want to explain a certain case or a class of phenomena;
2) You develop a theory and derive a set of hypotheses;
3) You test the hypotheses with data;
4) You conclude about the plausibility of the theory;
5) You write a paper with a structure (research question, theory, empirical analysis, conclusions) that mirrors the steps above.

But in practice, social science research often works like this:
1) You want to explain a certain case or a class of phenomena;
2) You test a number of hypotheses with data;
3) You pick the hypotheses that matched the data best and combine them in a theory;
4) You conclude that this theory is plausible and relevant;
5) You write a paper with a structure (research question, theory, empirical analysis, conclusions) that does not reflect the steps above.

In short, an inductive quest for a plausible explanation is masked and reported as deductive theory-testing. This fallacy is both well-known and rather common (at least in the fields of political science and public administration). And, in my experience, it turns out to be tacitly supported by the policies of some journals and reviewers.

For one of my previous research projects, I studied the relationship between public support and policy output in the EU. Since the state of the economy can influence both, I included levels of unemployment as a potential omitted variable in the empirical analysis. It turned out that lagged unemployment is positively related to the volume of policy output. In the paper, I mentioned this result in passing but didn’t really discuss it at length because 1) the original relationship between public support and policy output was not affected, and 2) although highly statistically significant, the result was quite puzzling.
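
For concreteness, here is a minimal sketch of what such a check might look like, assuming yearly data. The file name, variable names and the statsmodels-based specification are purely illustrative – they are not the actual data or model from the paper.

```python
# Illustrative sketch only: the data file, variable names and the 4-year lag
# are hypothetical, not the actual dataset or specification from the paper.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("eu_policy_output.csv")               # hypothetical yearly data
df["unemployment_lag4"] = df["unemployment"].shift(4)  # unemployment four years earlier

model = smf.ols(
    "policy_output ~ public_support + unemployment_lag4",
    data=df.dropna(subset=["unemployment_lag4"]),
)
print(model.fit().summary())
```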

When I submitted the paper to a leading political science journal, a large part of the reviewers' critiques focused on the fact that I did not have an explanation for the link between unemployment and policy output in the paper. But why should I? I did not have a good explanation for why these variables should be related (with a precisely 4-year lag) when I did the empirical analysis, so why pretend? Of course, I suspected unemployment as a confounding variable for the original relationship I wanted to study, so I took pains to collect the data and run the tests, but that certainly doesn't count as an explanation for the observed statistical relationship between unemployment and policy output. But the point is, it would have been entirely possible to write the paper as if I had strong ex ante theoretical reasons to expect that rising unemployment increases the policy output of the EU, and that the empirical test supports (or, more precisely, does not reject) this hypothesis. That would certainly have greased the wheels of the review process, and it only takes moving a few paragraphs from the concluding section to the theory part of the paper. So, if your data has a surprising story to tell, make sure it looks like you anticipated it all along – you even had a theory that predicted it! This is what I call 'writing with the rear-view mirror'.

Why is it a problem? After all, an empirical association is an empirical association, whether you theorized about it beforehand or not. So where is the harm? As I see it, by pretending to have theoretically anticipated an empirical association, you grant it undue credence. Not only is the data consistent with a link between the two variables, but there are, supposedly, strong theoretical grounds to believe the link should be there. A surprising statistical association, however robust, is just what it is – a surprising statistical association that possibly deserves speculation, exploration and further research. On the other hand, a robust statistical association 'predicted' by a previously developed theory is far more – it is a claim that we understand how the world works.
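
A toy simulation makes the point concrete (the sample size and the number of candidate predictors are arbitrary choices of mine, not anything from the paper): if every candidate predictor is pure noise and you report only the best-fitting one, a 'significant' surprise will still turn up most of the time.

```python
# Toy simulation: screen 20 pure-noise predictors and keep the best-fitting one.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_obs, n_candidates, n_sims = 50, 20, 2000
hits = 0
for _ in range(n_sims):
    y = rng.normal(size=n_obs)
    X = rng.normal(size=(n_obs, n_candidates))   # candidate 'explanations', all noise
    pvals = [stats.pearsonr(X[:, j], y)[1] for j in range(n_candidates)]
    if min(pvals) < 0.05:                        # the 'surprising' best fit
        hits += 1
print(f"Runs with at least one p < 0.05: {hits / n_sims:.2f}")
# With 20 independent noise predictors this comes out around 1 - 0.95**20 ≈ 0.64.
```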

As long as journals and reviewers act as if proper science never deviates from the hypothetico-deductive canon, writers will pretend that they follow it. As long as openly descriptive and exploratory research is frowned upon, sham theory-testing will prevail.

Eventually, my paper on the links between public support, unemployment and policy output in the EU got accepted (in a different journal). Surprisingly, given the bumpy review process, it has just been selected as the best article published in that journal in 2011. Needless to say, an explanation of why unemployment might be related to EU policy output is still wanting.

The ‘Nobel’ prize for Economics, VAR and Political Science

Yesterday the Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel was awarded to the economists Thomas J. Sargent and Christopher A. Sims "for their empirical research on cause and effect in the macroeconomy" (press release here; Tyler Cowen presented the laureates here and here). The award for Christopher Sims in particular comes for the development of vector autoregression – a method for analyzing 'how the economy is affected by temporary changes in economic policy and other factors'. The application of vector autoregression (VAR) is not confined to economics, however: it can be used to analyze any set of dynamic relationships.
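
For readers curious about what this looks like in practice, here is a minimal sketch of fitting a VAR and tracing impulse responses, assuming Python with statsmodels; the data file and column names are placeholders rather than any real dataset.

```python
# Minimal VAR sketch: the data file and column names are placeholders.
import pandas as pd
from statsmodels.tsa.api import VAR

data = pd.read_csv("opinion_policy.csv", index_col=0)  # hypothetical time series
series = data[["public_support", "policy_output"]]     # two placeholder columns

results = VAR(series).fit(maxlags=4, ic="aic")  # lag order selected by AIC
print(results.summary())

irf = results.irf(10)  # impulse responses over 10 periods:
irf.plot(orth=True)    # how a one-off shock to one series propagates to the other
```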

Unfortunately, despite having been developed back in the 1970s, VAR remains somewhat unpopular in political science and public administration (as I learned the hard way trying to publish an analysis that uses VAR to explore the relationship between public opinion and policy output in the EU over time). A quick-and-dirty search for 'VAR'/'vector autoregression' in Web of Science [1980-2011] returns 1810 hits under the category Economics and only 52 under Political Science (of which 23 are also filed under Economics). This is the distribution over the last three decades:

Time period – Econ / PolSci
1980-1989 – 13 / 1
1990-1999 – 406 / 15
2000-2011 – 1391 / 36

With all the disclaimers that go with using Web of Science as a data source, the discrepancy is clear.

It remains to be seen whether the Nobel prize for Sims will serve to popularize VAR outside the field of economics.

Rules of Reason, Reasons of Rules

This blog is about the uses and abuses of research on public policy and administration.

It is about the Rules of Reason – the rules that guide the production of social science and that structure the design of academic research.

But it is also about the Reasons of Rules – the reasonableness of social rules and the (ir)rationality of public institutions.