
Category: Causality

What are the effects of COVID-19 on mortality? Individual-level causes of death and population-level estimates of causal impact

Introduction

How many people have died from COVID-19? What is the impact of COVID-19 on mortality in a population? Can we use excess mortality to estimate the effects of COVID-19? In this text I will explain why the answers to the first two questions need not be the same. That is, the sum of cases where COVID-19 has been determined to be the direct[1] cause of death need not equal the population-level estimate of the causal impact of COVID-19. When measurement of the individual-level causes of death is imperfect, using excess mortality (observed minus expected deaths) to measure the impact of COVID-19 leads to an underestimate of the number of individual cases where COVID-19 has been the direct cause of death.

Assumptions

The major assumption on which the argument rests is that some of the people who have died from COVID-19 would have died from other causes within a specified, relatively short time frame (say, within a month). It seems very reasonable to assume that at least some of the victims of COVID-19 would have succumbed to other causes of death. This is especially easy to imagine given that COVID-19 kills disproportionately the very old, and that the ultimate causes of death it provokes – respiratory problems, lung failure, etc. – are shared with other common diseases with high mortality among the older population, such as the flu.

Defining individual and population-level causal effects

With this crucial assumption in mind, we can construct the following simple table. Cell…
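To make the arithmetic behind this argument concrete, here is a minimal numerical sketch in Python, with purely hypothetical numbers (they are not estimates from the post): when some COVID-19 victims would have died of other causes within the same period anyway, excess mortality comes out lower than the count of deaths directly caused by COVID-19.

```python
# Hypothetical numbers for illustration only (not taken from the post).
baseline_expected_deaths = 1000   # deaths expected in the period without COVID-19
covid_direct_deaths = 100         # individual cases where COVID-19 was the direct cause
would_have_died_anyway = 20       # COVID-19 victims who would have died of other causes anyway

# Observed deaths: the baseline loses the 20 people who now appear among the COVID-19
# deaths, because each of them dies once, of COVID-19 instead of the other cause.
observed_deaths = baseline_expected_deaths - would_have_died_anyway + covid_direct_deaths

excess_mortality = observed_deaths - baseline_expected_deaths
print(excess_mortality)     # 80: the population-level impact on mortality
print(covid_direct_deaths)  # 100: the sum of individual cases with COVID-19 as direct cause
```

Under these assumed figures, excess mortality (80) understates the number of individual deaths directly caused by COVID-19 (100) by exactly the number of victims who would have died anyway.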

More on QCA solution types and causal analysis

Following up my post on QCA solution types and their appropriateness for causal analysis, Eva Thomann was kind enough to provide a reply. I am posting it here in its entirety:

Why I still don’t prefer parsimonious solutions (Eva Thomann)

Thank you very much, Dimiter, for issuing this blog debate and inviting me to reply. In your blog post, you outline why, absent counterevidence, you find it justified to reject applied Qualitative Comparative Analysis (QCA) paper submissions that do not use the parsimonious solution. I think I agree with some but not all of your points. Let me start by clarifying a few things.

Point of clarification 1: COMPASSS statement is about bad reviewer practice

It’s good to see that we all seem to agree that “no single criterion in isolation should be used to reject manuscripts during anonymous peer review”. The reviewer practice addressed in the COMPASSS statement is a bad practice. Highlighting this bad reviewer practice is the sole purpose of the statement. Conversely, the COMPASSS statement does not take sides when it comes to preferring specific solution types over others. The statement also does not imply anything about the frequency of this reviewer practice – this part of your post is pure speculation. Personally, I have heard people complaining about getting papers rejected for promoting or using conservative (QCA-CS), intermediate (QCA-IS), and parsimonious (QCA-PS) solutions with about the same frequency. But it is of course impossible for COMPASSS to get a representative picture of this phenomenon. The…

QCA solution types and causal analysis

Qualitative Comparative Analysis (QCA) is a relatively young research methodology that has frequently been under attack from all corners, often for the wrong reasons. But there is a significant controversy brewing within the community of people using set-theoretic methods (of which QCA is one example) as well. Recently, COMPASSS – a prominent network of scholars interested in QCA – issued a Statement on Rejecting Article Submissions because of QCA Solution Type. In this statement they ‘express the concern … about the practice of some anonymous reviewers to reject manuscripts during peer review for the sole, or primary, reason that the given study chooses one solution type over another’. The ‘solution type’ refers to the procedure used to minimize the ‘truth tables’ which collect the empirical data in QCA (and other set-theoretic) research when there are unobserved combinations of conditions (factors, variables) in the data. Essentially, when there is missing data (which is practically always the case), the solution type, together with the minimization algorithm, determines the inference you get from the data. I have not been involved in drawing up the statement (and I am not a member of COMPASSS), and I have not reviewed any articles using QCA recently, so I am not directly involved in this controversy on either side. At the same time, I have been interested in QCA and related methodologies for a while now, I have covered their basics in my textbook on research design, and I remain intrigued both by their promise and their…
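As a rough illustration of how the treatment of unobserved combinations (the ‘logical remainders’) drives the solution type, here is a toy sketch that uses sympy’s SOPform as a stand-in for dedicated QCA software; the two-condition truth table and the condition names are made up for illustration only. The conservative solution keeps remainders out of the minimization, while the parsimonious solution lets them enter as ‘don’t cares’.

```python
from sympy import symbols
from sympy.logic import SOPform

# Toy truth table with two conditions (A, B) and an outcome Y.
# Observed rows: A=1,B=1 -> Y=1;  A=1,B=0 -> Y=0;  A=0,B=0 -> Y=0.
# The row A=0,B=1 is unobserved -- a logical remainder.
A, B = symbols('A B')
positive_rows = [[1, 1]]   # observed combinations where the outcome is present
remainders = [[0, 1]]      # unobserved combinations (missing data)

# Conservative solution: remainders are excluded from the minimization.
conservative = SOPform([A, B], positive_rows)
# Parsimonious solution: remainders enter the minimization as "don't cares".
parsimonious = SOPform([A, B], positive_rows, remainders)

print(conservative)   # A & B  -- both conditions appear in the solution
print(parsimonious)   # B      -- a simpler, more parsimonious expression
```

The point of the sketch is only that the same observed evidence can yield A & B or just B depending on how the remainders are handled, which is exactly why the choice of solution type matters for the causal inference drawn from the data.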

Is interpretation descriptive or explanatory?

One defining feature of interpretivist approaches to social science is the idea that the goal of analysis is to provide interpretations of social reality rather than law-based explanations. But of course nobody these days believes in law-based causality in the social world anyway, so the question remains whether interpretation is to be understood as purely descriptive or as explanatory. Here is what I wrote about this issue for an introductory chapter on research design in political science. The paragraph, however, will need to be removed from the text to make the chapter shorter, so I post it here instead. I will be glad to see opinions from scholars who actually work with interpretivist methodologies:

It is difficult to position interpretation (in the narrow sense of the type of work interpretivist political scientists engage in) between description and explanation. Clifford Geertz notes that (ethnographic) description is interpretive (Geertz 1973: 20), but that still leaves open the question whether all interpretation is descriptive. Bevir and Rhodes (2016) insist that interpretivists reject a ‘scientific concept of causation’, but suggest that we can explain actions as products of subjective reasons, meanings, and beliefs. In addition, intentionalist explanations are to be supported by ‘narrative explanations’. In my view, however, a ‘narrative’ that ‘explains’ by relating actions to beliefs situated in a historical context is conceptually and observationally indistinguishable from a ‘thick description’, and better regarded as such.

Correlation does not imply causation. Then what does it imply?

‘Correlation does not imply causation’ is an adage students from all social sciences are made to recite from a very early age. What is less often systematically discussed is what could actually be going on when two phenomena are correlated but not causally related. Let’s try to make a list:

1) The correlation might be due to chance. T-tests and p-values are generally used to guard against this possibility.

1a) The correlation might be due to coincidence. This is essentially a variant of the previous point, but with a focus on time series. It is especially easy to mistake pure noise (randomness) for patterns (relationships) when one looks at two variables over time. If you look at the numerous ‘correlation is not causation’ jokes and cartoons on the internet, you will note that most concern spurious correlations between two variables over time (e.g. the number of pirates and global warming): it is just easier to find such examples in time series than in cross-sectional data (see the simulation sketch after this excerpt).

1b) Another reason to distrust correlations is the so-called ‘ecological inference’ problem. The problem arises when data is available at several levels of observation (e.g. people nested in municipalities nested in states). A correlation between two variables aggregated at a higher level (e.g. states) cannot be used to imply a correlation between these variables at the lower level (e.g. people). In that case, the higher-level correlation is a statistical artifact, although not necessarily one due to mistaking ‘noise’ for ‘signal’.

2) The correlation might be due to a third variable being causally related to the two correlated variables we observe. This is the well-known omitted…
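To illustrate point 1a), here is a small simulation sketch in Python (with made-up, simulated data): it generates pairs of completely independent random walks and checks how often they nonetheless look strongly correlated.

```python
import numpy as np

rng = np.random.default_rng(42)
n_series, n_periods = 1000, 100

# Two sets of independent random walks: cumulative sums of pure noise.
x = rng.normal(size=(n_series, n_periods)).cumsum(axis=1)
y = rng.normal(size=(n_series, n_periods)).cumsum(axis=1)

# Correlate each pair of unrelated walks over time.
corrs = np.array([np.corrcoef(x[i], y[i])[0, 1] for i in range(n_series)])
print(f"Share of pairs with |r| > 0.5: {(np.abs(corrs) > 0.5).mean():.2f}")
```

Nothing links the two series in any pair, yet a sizable share of them show strong correlations, simply because trending noise is easy to mistake for a relationship.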