
RE-DESIGN Posts

Writing with the rear-view mirror

Social science research is supposed to work like this:

1) You want to explain a certain case or a class of phenomena;
2) You develop a theory and derive a set of hypotheses;
3) You test the hypotheses with data;
4) You draw conclusions about the plausibility of the theory;
5) You write a paper with a structure (research question, theory, empirical analysis, conclusions) that mirrors the steps above.

But in practice, social science research often works like this:

1) You want to explain a certain case or a class of phenomena;
2) You test a number of hypotheses with data;
3) You pick the hypotheses that match the data best and combine them into a theory;
4) You conclude that this theory is plausible and relevant;
5) You write a paper with a structure (research question, theory, empirical analysis, conclusions) that does not reflect the steps above.

In short, an inductive quest for a plausible explanation is masked and reported as deductive theory-testing. This fallacy is both well-known and rather common (at least in the fields of political science and public administration). And, in my experience, it turns out to be tacitly supported by the policies of some journals and reviewers. For one of my previous research projects, I studied the relationship between public support and policy output in the EU. Since the state of the economy can influence both, I included levels of unemployment as a potential omitted variable in the empirical analysis. It turned out that lagged unemployment is positively related to the volume of policy output. In the paper, I mentioned this result in passing…

When ‘just looking’ beats regression

In a draft paper currently under review I argue that the institutionalization of a common EU asylum policy has not led to a race to the bottom with respect to asylum applications, refugee status grants, and some other indicators. The graph below traces the number of asylum applications lodged in 29 European countries since 1997. My conclusion is that there is no evidence in support of the theoretical expectation of a race to the bottom (an ever-declining rate of registered applications). One of the reviewers insists that I use a regression model to quantify the change and to estimate the uncertainty of the conclusion. While in general I couldn’t agree more that being open about the uncertainty of your inferences is a fundamental part of scientific practice, in this particular case I refused to fit a regression model and calculate standard errors or confidence intervals. Why? In my opinion, just looking at the graph is enough to see that there is no race to the bottom – application rates have gone down and then up again, while the institutionalization of a common EU policy has only strengthened over the last decade. Calculating standard errors would be superficial because it is hard to think of the yearly totals as samples from some underlying population. Estimating a regression that quantifies the EU effect would only work if the model were good enough to capture the fundamental dynamics of asylum applications before isolating the EU effect, and there is no such model. But most importantly, I just didn’t feel…
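For what it’s worth, the linear trend regression the reviewer has in mind is easy enough to write down. A minimal sketch in plain Python, using made-up yearly totals (not the paper’s actual data) with the down-then-up shape described above:

```python
import math

def ols_trend(years, counts):
    """Ordinary least squares fit of counts = a + b * year.
    Returns the slope b and its standard error."""
    n = len(years)
    xbar = sum(years) / n
    ybar = sum(counts) / n
    sxx = sum((x - xbar) ** 2 for x in years)
    sxy = sum((x - xbar) * (y - ybar) for x, y in zip(years, counts))
    b = sxy / sxx
    a = ybar - b * xbar
    # residual variance estimated with n - 2 degrees of freedom
    sse = sum((y - (a + b * x)) ** 2 for x, y in zip(years, counts))
    se_b = math.sqrt(sse / (n - 2) / sxx)
    return b, se_b

# Hypothetical yearly application totals (in thousands), invented for
# illustration: they fall and then recover, as in the post's graph.
years = list(range(1997, 2011))
apps = [680, 640, 590, 550, 510, 480, 460, 440, 420, 410, 430, 470, 520, 560]
slope, se = ols_trend(years, apps)
```

On this invented down-then-up series the fitted slope comes out negative, so the linear model would report a “decline” even though the final years recover. That is exactly the sense in which forcing a trend regression on such data can mislead rather than inform.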

Climate science wars: The Lysenko move

The Wall Street Journal (WSJ) has published a letter entitled ‘No need to panic about global warming’ which opines that “There is no compelling scientific argument for drastic action to “decarbonize” the world’s economy” and “Alarmism also offers an excuse for governments to raise taxes, taxpayer-funded subsidies for businesses that understand how to work the political system, and a lure for big donations to charitable foundations promising to save the planet.” The letter is signed by 2 meteorologists and 14 other scientists. Here is another letter signed by 255 scientists making the case for human-induced global warming and published in Science in 2010. This letter allegedly has been submitted to, but rejected by, the WSJ. Boing Boing claims that it was also drafted in response to the WSJ piece, but this is obviously wrong since the Science letter has been around since 2010. The 16 scientists who got their letter published by the WSJ accuse the “international warming establishment” of pushing science into ‘the frightening period when Trofim Lysenko hijacked biology in the Soviet Union’. Wow! This rhetorical trick is so incredibly audacious that it deserves to be immortalized as the “Lysenko move” and included in the standard weaponry for academic spats, next to reductio ad absurdum and skeletons in the closet. Who is gonna be the next victim?

Unit of analysis vs. Unit of observation

Having graded another batch of 40 student research proposals, the distinction between ‘unit of analysis’ and ‘unit of observation’ proves to be, yet again, one of the trickiest for the students to master. After several years of experience, I think I have a good grasp of the difference between the two, but it obviously remains a challenge to explain it to students. King, Keohane and Verba (1994) [KKV] introduce the difference in the context of descriptive inference, where it serves the argument that what goes under the heading of a ‘case study’ often actually has many observations (p.52, see also pp.116-117). But, admittedly, the book is somewhat unclear about the distinction, and unambiguous definitions are not provided. In my understanding, the unit of analysis (a case) is the level at which you pitch the conclusions. The unit of observation is the level at which you collect the data. So, the unit of observation and the unit of analysis can be the same, but they need not be. In the context of quantitative research, the units of observation could be students and the units of analysis classes, if classes are compared. Or students can be both the units of observation and the units of analysis, if students are compared. Or students can be the units of analysis and grades the units of observation, if several observations (grades) are available per student. So it all depends on the design. Simply put, the unit of observation is the row in the data table, but the unit of analysis can…
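To make the last example concrete, here is a minimal sketch (with made-up grade records) in which grades are the units of observation, one per row, while students become the units of analysis once the rows are aggregated:

```python
from collections import defaultdict

# Units of observation: individual grade records, one per row.
grades = [
    {"student": "ana", "grade": 8},
    {"student": "ana", "grade": 6},
    {"student": "boris", "grade": 7},
    {"student": "boris", "grade": 9},
    {"student": "boris", "grade": 8},
]

# Units of analysis: students -- aggregate the grade rows up to that level.
by_student = defaultdict(list)
for row in grades:
    by_student[row["student"]].append(row["grade"])

# One value per student: the level at which conclusions are pitched.
student_means = {s: sum(g) / len(g) for s, g in by_student.items()}
```

The same rows could just as well be aggregated to classes instead, making classes the unit of analysis. Nothing in the data table itself fixes the unit of analysis; the research design does.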

Myron’s Law

“Asymptotically, any finite tax code collects zero revenue.” This is what economist Paul Romer calls Myron’s Law (after Myron Scholes). It is a great aphorism, as it illuminates a neglected source of institutional change: the opportunistic adaptation of the regulated actors to the rules, which spurs transformations of the rules, which lead to further behavioral adaptation, ad infinitum. Usually, institutions are regarded as sticky, slow-moving and resilient to change. That might be true for constitutions, but it is easily disqualified when one looks beyond the fundamental ‘rules of the game’ into tax codes, the network of public organizations, the details of electoral systems and other lower-level institutions. Few seem to realize that these change all the time, and we have precious little theory to explain why. In fact, the degree of change is such that one wonders whether tax codes and the like qualify as institutions at all. In a way, they are like the proverbial river – always changing yet somehow remaining the same.

Hyperlinks

Migration and unemployment. File under ‘correlation is not causation’. And ‘endogeneity’. And ‘instrumental variables that do not make sense’.

Equitable decision making has intrinsic value. Apparently, there is a region in the brain [the anterior insula] ‘linked to the experience of subjective disutility’. Ah, the prospects for utility maximization!

Fukuyama on European identities

Surfing on the obvious

A post on the philosophy of explanation at Understanding Society