
Writing with the rear-view mirror

Social science research is supposed to work like this:
1) You want to explain a certain case or a class of phenomena;
2) You develop a theory and derive a set of hypotheses;
3) You test the hypotheses with data;
4) You conclude about the plausibility of the theory;
5) You write a paper with a structure (research question, theory, empirical analysis, conclusions) that mirrors the steps above.

But in practice, social science research often works like this:
1) You want to explain a certain case or a class of phenomena;
2) You test a number of hypotheses with data;
3) You pick the hypotheses that matched the data best and combine them into a theory;
4) You conclude that this theory is plausible and relevant;
5) You write a paper with a structure (research question, theory, empirical analysis, conclusions) that does not reflect the steps above.

In short, an inductive quest for a plausible explanation is masked and reported as deductive theory-testing. This fallacy is both well-known and rather common (at least in the fields of political science and public administration). And, in my experience, it turns out to be tacitly supported by the policies of some journals and reviewers.

For one of my previous research projects, I studied the relationship between public support and policy output in the EU. Since the state of the economy can influence both, I included the level of unemployment in the empirical analysis to control for a potential confounder. It turned out that lagged unemployment is positively related to the volume of policy output. In the paper, I mentioned this result in passing but didn’t discuss it at length because 1) the original relationship between public support and policy output was not affected, and 2) although highly statistically significant, the result was quite puzzling.

When I submitted the paper to a leading political science journal, a large part of the reviewers’ critiques focused on the fact that the paper offered no explanation for the link between unemployment and policy output. But why should it? I had no good explanation for why these variables should be related (with a precisely 4-year lag) when I did the empirical analysis, so why pretend otherwise? Of course, I suspected unemployment as a confounding variable for the original relationship I wanted to study, which is why I took the pains to collect the data and run the tests, but that certainly doesn’t count as an explanation for the observed statistical relationship between unemployment and policy output. The point is, it would have been entirely possible to write the paper as if I had strong ex ante theoretical reasons to expect that rising unemployment increases the policy output of the EU, and that the empirical test supports (or, more precisely, does not reject) this hypothesis. That would certainly have greased the review process, and it only takes moving a few paragraphs from the concluding section to the theory part of the paper. So, if your data has a surprising story to tell, make sure it looks like you anticipated it all along and even had a theory that predicted it! This is what I call ‘writing with the rear-view mirror’.

Why is it a problem? After all, an empirical association is an empirical association whether you theorized about it beforehand or not. So where is the harm? As I see it, by pretending to have theoretically anticipated an empirical association, you grant it undue credence. Not only are the data consistent with a link between two variables, but there appear to be strong theoretical grounds to believe the link should be there. A surprising statistical association, however robust, is just that: a surprising statistical association that possibly deserves speculation, exploration and further research. A robust statistical association ‘predicted’ by a previously developed theory is much more: it is a claim that we understand how the world works.
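Just how much credence selection alone can buy is easy to quantify. Below is a minimal simulation sketch in Python (my own illustration, assuming NumPy and SciPy; the numbers have nothing to do with the actual data discussed in this post). It tests 20 candidate predictors against an outcome that is pure noise and, in the spirit of rear-view-mirror writing, reports only the best-fitting one:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_obs, n_candidates, n_studies = 50, 20, 1000

    significant = 0
    for _ in range(n_studies):
        y = rng.standard_normal(n_obs)  # outcome: pure noise, no real effects
        # p-value of the correlation between y and each candidate predictor
        pvals = [stats.pearsonr(rng.standard_normal(n_obs), y)[1]
                 for _ in range(n_candidates)]
        if min(pvals) < 0.05:  # report only the best-fitting hypothesis
            significant += 1

    print(f"Share of studies with a 'significant' best hypothesis: "
          f"{significant / n_studies:.2f}")

With 20 independent candidates, the chance that the best one clears p < .05 is about 1 − 0.95^20, or roughly 0.64, so most of these simulated ‘studies’ can report a significant association even though no real relationship exists.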

As long as journals and reviewers act as if proper science never deviates from the hypothetico-deductive canon, writers will pretend to follow it. And as long as openly descriptive and exploratory research is frowned upon, sham theory-testing will prevail.

Eventually, my paper on the links between public support, unemployment and policy output in the EU got accepted (in a different journal). Surprisingly, given the bumpy review process, it has just been selected as the best article published in that journal during 2011. Needless to say, an explanation of why unemployment might be related to EU policy output is still wanting.
