Intuitions about case selection are often wrong

Imagine the following simple setup: there are two switches (X and Z) and a lamp (Y). Both switches and the lamp are ‘On’. You want to know what switch X does, but you have only one try to manipulate the switches. Which one would you choose to switch off: X, Z or it doesn’t matter?

These are the results of the quick Twitter poll I did on the question:

Clearly, almost half of the respondents think it doesn’t matter, switching X is the second choice, and only 2 out of 15 would switch Z to learn what X does. Yet, it is by pressing Z that we have the best chance of learning something about the effect of X. This seems quite counter-intuitive, so let me explain.

First, let’s clarify the assumptions embedded in the setup: (A1) both switches and the lamp can be either ‘On’ [1] or ‘Off’ [0]; (A2) the lamp is controlled only by these switches; there is nothing outside the system that controls its output; (A3) X and Z can work individually or in combination (in the latter case, the lamp is ‘On’ only if both switches are ‘On’ simultaneously).

Now let’s represent the information we have in a table:

Switch X | Switch Z | Lamp Y
   1     |    1     |   1
   0     |    0     |   0

We are allowed to make one experiment in the setup (press only one switch). In other words, we can add an observation for one more row of the table. Which one should it be?

Well, let’s see what happens if we switch off X (let’s call this strategy S1). There are two possible outcomes: either the lamp stays on (S1a) or it goes off (S1b).

In the first case, when the lamp stays on (represented as the second line in the table below), we can conclude that X is not necessary for the lamp to be ‘On’, but we do not know whether X can switch on the lamp on its own (whether it is sufficient to do so).

Switch X | Switch Z | Lamp Y
   1     |    1     |   1
   0     |    1     |   1
   0     |    0     |   0

If instead the lamp goes off when we switch off X, we know that X is necessary for the outcome, but we do not know whether X can turn on the lamp on its own or only in combination with Z.

Switch X | Switch Z | Lamp Y
   1     |    1     |   1
   0     |    1     |   0
   0     |    0     |   0

To sum up, by pressing X we learn either that (S1a) X is not necessary or that (S1b) X matters but we do not know whether on its own or only in combination with Z.

 

Now, let’s see what happens if we press Z (strategy S2). Again either the lamp stays on (S2a) or it goes off (S2b).

Under the first scenario, we learn that X is sufficient to turn on the lamp.

Switch X | Switch Z | Lamp Y
   1     |    1     |   1
   1     |    0     |   1
   0     |    0     |   0

Under the second scenario, we learn that X is not sufficient to turn on the light. It is still possible that X is needed in combination with Z to turn on the lamp.

Switch X | Switch Z | Lamp Y
   1     |    1     |   1
   1     |    0     |   0
   0     |    0     |   0

To sum up, by pressing Z we learn either that (S2a) X can turn on the lamp or (S2b) that it cannot turn on the lamp on its own but is possibly necessary in combination with Z. 

Comparing the two sets of inferences, I think it is clear that the second one is much more informative. By pressing Z we learn either that we can turn on the lamp by pressing X or that we cannot unless Z is ‘On’. By pressing X we learn next to nothing: either we are still in the dark about whether X works on its own to turn on the lamp (sorry for the pun), or we learn that X matters but still do not know whether we also need Z to be ‘On’.

If you are still unconvinced, the following table summarizes all inferences under all strategies and contingencies about each of the possible effects (X, Z, and the interaction XZ):

X works on its own | Z works on its own | Only XZ works | Strategy
        ?          |        True        |     False     |   S1a
        ?          |       False        |       ?       |   S1b
      True         |         ?          |     False     |   S2a
     False         |         ?          |       ?       |   S2b

It should be obvious now that we are better off by pressing Z to learn about the effect of X.
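These inferences can be double-checked mechanically. Below is a small Python sketch (my own illustration; the hypothesis labels are mine) that enumerates the causal structures allowed by assumptions A1–A3 and checks which of them survive each experiment:

```python
# Candidate causal structures allowed by A1-A3: the lamp may respond to
# X alone, Z alone, either switch, or only both together.
HYPOTHESES = {
    "Y = X":       lambda x, z: x,
    "Y = Z":       lambda x, z: z,
    "Y = X or Z":  lambda x, z: x | z,
    "Y = X and Z": lambda x, z: x & z,
}

def consistent(observations):
    """Return the hypotheses that reproduce every (x, z) -> y observation."""
    return {name for name, f in HYPOTHESES.items()
            if all(f(x, z) == y for (x, z), y in observations.items())}

# What we already know: both switches on -> lamp on; both off -> lamp off.
# All four hypotheses are consistent with this evidence.
known = {(1, 1): 1, (0, 0): 0}

# S1 flips X off (adding row (0, 1)); S2 flips Z off (adding row (1, 0)).
for row in [(0, 1), (1, 0)]:
    for outcome in [1, 0]:
        surviving = consistent({**known, row: outcome})
        # "X works on its own" is settled only if all surviving hypotheses
        # agree on what happens when X=1 and Z=0.
        verdicts = {HYPOTHESES[name](1, 0) for name in surviving}
        answer = "?" if len(verdicts) > 1 else str(bool(verdicts.pop()))
        print(f"observe {row} -> {outcome}: X works on its own: {answer}")
```

Running this shows that flipping Z (the rows with X=1, Z=0) settles whether X works on its own under both possible outcomes, while flipping X leaves the question open either way, matching the table above.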

Good, but what’s the relevance of this little game? Well, the game resembles a research design situation in which we have one observation (case), we have the resources to add only one more, and we have to select which observation to make. In other words, the game is about case selection.

For example, we observe a case with a rare outcome – say, successful regional integration. We suspect that two factors are at play, both of which are present in the case – say, high trade volume within the integrating bloc and a democratic form of government for all units. And we want to probe the effect of trade volume in particular. In that case, the analysis above suggests that we should choose a case that has the same volume of trade but a non-democratic form of government, rather than a case with a low volume of trade and a democratic form of government.

This result is counter-intuitive, so let’s spell out why. First, note that we are interested in the effect of X (the effect of the switch and of trade volume) and not in explaining Y (how to turn on the lamp or how does regional integration come about). This is a subtle difference in interpretation, but one that is crucial for the analysis. Second, note that we are more interested in the effect of X than in the effect of Z, although both are potential causes of Y. If both X and Z are of equal interest, then obviously it doesn’t matter which one observation we make. Third, the result hinges on the assumption that there is nothing other than X or Z (or their interaction) that matters for Y. Once we admit other possible causal variables in the set-up, then we are no longer better off switching Z to learn the effect of X.

Sooooo, don’t take this little game as general advice on case selection. But it definitely shows that when it comes to research design our intuitions cannot always be trusted.

P.S. The analysis does not depend on the assumption of binary effects and outcomes: it works equally well with probabilistic effects that are additive or multiplicative (involving an interaction).


More on QCA solution types and causal analysis

Following up my post on QCA solution types and their appropriateness for causal analysis, Eva Thomann was kind enough to provide a reply. I am posting it here in its entirety :

Why I still don’t prefer parsimonious solutions (Eva Thomann)

Thank you very much, Dimiter, for initiating this blog debate and inviting me to reply. In your blog post, you outline why, absent counterevidence, you find it justified to reject applied Qualitative Comparative Analysis (QCA) paper submissions that do not use the parsimonious solution. I think I agree with some but not all of your points. Let me start by clarifying a few things.

Point of clarification 1: COMPASSS statement is about bad reviewer practice

It’s good to see that we all seem to agree that “no single criterion in isolation should be used to reject manuscripts during anonymous peer review”. The reviewer practice addressed in the COMPASSS statement is a bad practice. Highlighting this bad reviewer practice is the sole purpose of the statement. Conversely, the COMPASSS statement does not take sides when it comes to preferring specific solution types over others. The statement also does not imply anything about the frequency of this reviewer practice – this part of your post is pure speculation. Personally, I have heard people complaining about getting papers rejected for promoting or using conservative (QCA-CS), intermediate (QCA-IS), and parsimonious solutions (QCA-PS) with about the same frequency. But it is of course impossible for COMPASSS to get a representative picture of this phenomenon.

The term “empirically valid” refers to the – to the best of my knowledge, entirely undisputed – fact that all solution types are (at least) based on the information contained in the empirical data. The disputed question is how we can or should go “beyond the facts” in causally valid ways when deriving QCA solutions.

Having said this, I will take off my “hat” as a member of the COMPASSS steering committee and contribute a few points to this debate. These points represent my own personal view and not that of COMPASSS or any of its bodies. I write as someone who sometimes uses QCA in her research and teaches it, too. Since I am not a methodologist, I won’t talk about fundamental issues of ontology and causality. I hope others will jump in on that.

Point of clarification 2: There is no point in personalizing this debate

In your comment you frequently refer to “the COMPASSS people”. But I find that pointless: COMPASSS hosts a broad variety of methodologists, users, practitioners, developers and teachers with different viewpoints and of different “colours and shapes”, some persons closer to “case-based” research, others closer to statistical/analytical research. Amongst others, Michael Baumgartner, whom you mention, is himself a member of the advisory board, and he has had methodological debates with his co-authors as well. Just because we can procedurally agree on a bad reviewer practice, it neither means we substantively agree on everything, nor does it imply that we disagree. History has amply shown how unproductive it can be for scientific progress when debates like these become personalized. Thus, if I could make a wish to you and everyone else engaging in this debate, it would be to talk about arguments rather than specific people. In what follows I will therefore refer to different approaches instead, except when referring to specific scholarly publications.

Point of clarification 3: There is more than one perspective on the validity of different solutions

As to your earlier point, which you essentially repeat here, that “but if two solutions produce different causal recipes, e.g. (1) AB-> E and (2) ABC-> E it cannot be that both (1) and (2) are valid”, my answer is: it depends on what you mean by “valid”.

It is common to look at QCA results as subset relations, here: statements of sufficiency. In a paper that is forthcoming in Sociological Methods & Research, Martino Maggetti and I call this the “approach emphasizing substantive interpretability”. From this perspective, the forward arrow “->” reads “is sufficient for”, and 1) in fact implies 2). Sufficiency means that X (here: AB) is a subset of Y (here: E). ABC is a subset of AB, and hence it is also a subset of E if AB is a subset of E. Logic dictates that any subset of a sufficient condition is also sufficient. Both are valid – they describe the sufficiency patterns in the data (and sometimes, some remainders) with different degrees of complexity.
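The subset logic here can be illustrated with a toy example (the case labels below are made up purely for illustration): any set that is a subset of a sufficient condition’s set is automatically a subset of the outcome set.

```python
# Each condition is represented by the set of cases in which it holds.
E   = {"case1", "case2", "case3", "case4"}  # cases showing the outcome
AB  = {"case1", "case2", "case3"}           # AB is a subset of E, so AB -> E
ABC = {"case1", "case2"}                    # adding C can only shrink the set

assert AB <= E    # (1) AB is sufficient for E
assert ABC <= AB  # ABC is a subset of AB by construction
assert ABC <= E   # hence (2) ABC is sufficient for E as well

# Subsets of subsets of E are themselves subsets of E: this is exactly
# the monotonicity of sufficiency described in the text.
```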

Scholars promoting an “approach emphasizing redundancy-free models” agree with that, if we speak of mere (monotonic) subset relations. Yet they require QCA solutions to be minimal statements of causal relevance. From this perspective, the arrow (it then is <->, see below) reads “is causally relevant for”, and if 1) is true then 2) cannot be true: 2) additionally grants causal relevance to C, but in 1) we said only A and B are causally relevant. As a causal statement, we can think of 2) as claiming more than 1).

To proponents of the approach emphasizing substantive interpretability (and I am one of them), it all boils down to the question:

“Can something be incorrect that follows logically and inevitably from a correct statement?”

Their brains shout:

“No, of course it can’t!”

I am making an informed guess here: this fact is so blatantly obvious to most people well-versed in set theory that it does not require a formal reply.

For everyone else, it is important to understand that in order to follow the reasoning you are proposing in your comment, you have to buy into a whole set of assumptions that underlie the method promoted in the publication you are referring to (Baumgartner 2015), called Coincidence Analysis or CNA. Let me illustrate this.

Point of clarification 4: QCA is not CNA

In fact, one cannot accept 2) if 1) is true in the special case when the condition “AB” is both minimally sufficient and contained in a minimally necessary condition for an outcome – which is also the situation you refer to (in your point 3). We have to replace the forward arrow “->” with “<->”. In such a situation, the X set and the Y set are equivalent. Of course, if AB and E are equivalent, then ABC and E are not equivalent at the same time. In reality, this – simultaneous necessity and sufficiency – is a rare scenario that requires a solution to be maximally parsimonious and to have both a high consistency (indicating sufficiency) AND a very high coverage (indicating necessity).

But QCA – as opposed to CNA – is designed to assess necessary conditions and/or sufficient conditions. They don’t have to be both. As soon as we are speaking of a condition that is sufficient but not necessary (or not part of a necessary condition), then, if 1) is correct, 2) also has to be correct. You are acknowledging this when saying that “if A is sufficient for E, AB is also sufficient, for any arbitrary B”.

I will leave it to the methodologists to clarify whether it is ontologically desirable to empirically analyse sufficient but not necessary (or necessary but not sufficient) conditions. As a political scientist, I find it theoretically and empirically interesting. I believe this is in the tradition of much comparative political research. It is clear, and you seem to agree, that what we find to be correct entirely depends on how we define “correct” – there’s a danger of circularity here. At this point it has to be pointed out that CNA is not QCA. Both are innovative, elegant and intriguing methods with their own pros and cons. I am personally quite fascinated by CNA and would like to see more applications of it, but I am not convinced that we can or need to transfer its assumptions to QCA.

What I like about the recent publications advocating an approach emphasizing redundancy-free models is that they highlight that not all conditions contained in QCA solutions may be causally interpretable, if only we knew the true data-generating process (DGP). That points to the general question of causal arguments made with QCA if there is limited diversity, which has received ample scholarly attention for already quite a while.

Point of agreement 1: We need a cumulative process of rational critique

You argue that “the point about non-parsimonious solutions deriving faulty causal inferences seems settled, at least until there is a published response that rebukes it”. But QCA scholars have long highlighted issues of implausible and untenable counterfactuals entailed in parsimonious solutions (e.g. here, here, here, here, here, here and here). None of the published articles advocating redundancy-free models has so far made concrete attempts to rebut these arguments. Following your line of reasoning, the points made by previous scholarship about parsimonious solutions deriving faulty causal inferences equally seem settled, at least until there is a published response that rebuts them.

Indeed, advocates of redundancy-free models seem either to dismiss the relevance of counterfactuals altogether because CNA, so it is argued, does not rely on counterfactuals to derive solutions; OR to argue that, in the presence of limited diversity, all solutions rely on counterfactuals. (Wouldn’t it be contradictory to argue both?) I personally would agree with the latter point. There can be no doubt that QCA (as opposed, perhaps, to CNA) is a set-theoretic, truth-table-based method that, in the presence of limited diversity, involves counterfactuals. Newer algorithms (such as eQMC, used in the QCA package for R) no longer actively “rely on” remainders for minimization, and they exclude difficult and untenable counterfactuals rather than including tenable and “easy” counterfactuals. But the reason why QCA involves counterfactuals remains that intermediate and parsimonious QCA solutions involve configurations of conditions some of which are empirically observed, while others (the counterfactuals) are not. There can be only one conclusion: the question of whether these counterfactuals are valid requires our keen attention.

Where does that leave us? To me, all that does certainly not mean that “the reliance on counterfactuals cannot be used to arbitrate this debate”. It means that different scholars have highlighted different issues relating to the validity of all solution types. None of these points have been conclusively rebuked so far. That, of course, leaves users in an intricate situation. They should not be punished for consistently and correctly following protocols proposed by methodologists of one or another approach.

Point of agreement 2: In the presence of limited diversity, QCA solutions can err in different directions

Parsimonious solutions are by no means unaffected by the problem that limited empirical diversity challenges our confidence in inferences. Indeed, we should be careful not to overlook that they err, too. As Ingo Rohlfing has pointed out, the question of the direction in which we want to err is different from the question of which solution is correct. The answer to the former question probably depends.

Let us return to the above example and assume that we have a truth table characterized by limited diversity. We get a conservative solution

(CS) ABC -> E,

and a parsimonious solution

(PS) A -> E.

Let us further assume that we know (which in reality we never do) that the true DGP is

(DGP) AB -> E.

Neither CS nor PS gives us the true DGP. To recap: to scholars emphasizing redundancy-free models, PS is “correct” because they define as “correct” a solution that does not contain causally irrelevant conditions. But note that PS here is also incomplete: the true result in this example is that, in order to observe the outcome E, A alone is not enough; it has to combine with B. Claiming that A alone is enough involves a counterfactual that could well be untenable. But the evidence alone does not allow us to conclude that B is irrelevant for E. It is usually only by making this type of oversimplification that parsimonious solutions reach the high coverage values required to be “causally interpretable” under an approach emphasizing redundancy-free models.
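The example can be replicated in a few lines. The sketch below (a toy version of the CS/PS/DGP example above, with the observed configurations chosen by me for illustration) shows that both solutions fit the limited evidence perfectly, yet neither recovers the true data-generating process:

```python
from itertools import product

def dgp(a, b, c):
    return a & b  # the true (in reality unknowable) process: AB -> E

# Limited diversity: only two of the eight configurations are observed.
observed = [(1, 1, 1), (0, 0, 0)]
rows = [(a, b, c, dgp(a, b, c)) for a, b, c in observed]

def cs(a, b, c):
    return a & b & c  # conservative solution: ABC -> E

def ps(a, b, c):
    return a          # parsimonious solution: A -> E

# Both solutions reproduce the observed evidence exactly...
assert all(cs(a, b, c) == e for a, b, c, e in rows)
assert all(ps(a, b, c) == e for a, b, c, e in rows)

# ...but each errs on some unobserved remainder: CS wrongly requires C,
# PS wrongly drops B.
full = list(product((0, 1), repeat=3))
assert any(cs(a, b, c) != dgp(a, b, c) for a, b, c in full)
assert any(ps(a, b, c) != dgp(a, b, c) for a, b, c in full)
```

For instance, CS misclassifies the remainder (A=1, B=1, C=0), while PS misclassifies (A=1, B=0, C=0): the two solution types err in different directions, as the heading says.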

To anyone with some basic training in QCA, this should raise some serious questions: isn’t one of the core assumptions of QCA that we cannot interpret the single conditions in its results in isolation, because they unfold their effect only in combination with other conditions? How, then, does QCA-PS fare when assessed against this assumption? I have not read a conclusive answer to this question yet.

Baumgartner and Thiem (2017) point out that with imperfect data, no method can be expected to deliver complete results. That may well be, but in QCA we deal with two types of completeness: complete AND-configurations, and the inclusion of all substitutable paths or “causal recipes” combined with the logical OR. In order to interpret a QCA solution as a sufficient condition, I want to be reasonably sure that the respective AND-configuration in fact reliably implies the outcome (even if it omits other configurations that may not have been observed in my data). Using this criterion, QCA-PS arguably fares worst (it most often misses out on causally relevant factors) and QCA-CS fares best (though it most often also still includes causally irrelevant factors).

To be sure, QCA-PS is sufficient for the outcome in the dataset under question. But I am unsure how I have to read it: “either X implies Y, or I did not observe X”? Or “X is causally relevant for Y in the data under question, but I don’t know if it suffices on its own”? There may well be specific situations in which all we want to know is whether some conditions are causally relevant subsets of sufficient conditions or not. But I find it misleading to claim that this is the only legitimate or even the main research interest of studies using QCA. I can think of many situations, such as public health crises or enforcing EU law, in which reliably achieving or preventing an outcome would have priority.

Let me be clear. The problem we are talking about is really neither QCA nor some solution type. The elephant in the room is essentially that observational data are rarely perfect and do not obey the laws of logic. But is QCA-PS really the best, or the only, or even a way out of this problem at all?

Point of agreement 3: There are promising and less promising strategies for causal assessment

The technical moment of QCA shares with statistical techniques that it is simply a cross-case comparison of data-set observations. As such, of course it also shares with other methods the limited possibility for directly deriving causal inferences from observational data. Most QCA scholars would therefore be very cautious to interpret QCA results causally when using observational data and in the presence of limited diversity. Obviously, set relation does not equal causation. How then, could a specific minimization algorithm alone plausibly facilitate causal interpretability?

QCA (as opposed to CNA) was always designed to be a multi-method approach. This means that the inferences of the cross-case comparison are not just interpreted as such, but strengthened and complemented with additional insights, usually theoretical, conceptual and case knowledge. Or, as Ragin (2008: 173) puts it:

“Social research (…) is built upon a foundation of substantive and theoretical knowledge, not just methodological technique”.

This way, we can combine the advantages offered by different methods and sources. Used in a formalized way, the combination of QCA with process tracing can even help to disentangle causally relevant from causally irrelevant conditions. This, of course, does not preclude the possibility that some solution types may lend themselves more to causal interpretation than others. It does suggest, though, that focusing on specific solution types alone is an ill-suited strategy for making valid causal assessments.

Point of disagreement: Nobody assumes that “everything matters”

Allow me to disagree that an approach emphasizing substantive interpretability assumes that “everything is relevant”. Of course that is nonsense. As with any other social science method I know, the researcher first screens the literature and the field in order to identify potentially relevant explanatory factors. The logic of truth table analysis (as opposed to CNA?) is then to start out with the subset of these previously identified conditions that are themselves consistently a subset of the outcome set, and then to search for evidence that they are irrelevant. This is not even an assumption, and it is very far from being “everything”.

Ways ahead

In my view it makes sense to have a division of labour: users follow protocols, methodologists foster methodological innovation and progress. I hope the above has made it clear that we are in the midst of what is, in my view, a welcome and needed debate about what “correctness” and “validity” mean in the QCA context. I find it useful to think of this as a diversity of approaches to QCA. It is important that researchers reflect on the ontology that underlies their work, but we should avoid drawing premature conclusions as well.

Currently (but I may be proven wrong) I am thinking that each solution type has its merits and limitations. We can’t eliminate limited diversity, but we can use different solution types for different purposes. For example, if policymakers seek to avoid investing public money in potentially irrelevant measures, the PS could be best. If they are interested in creating situations that are certain to produce an outcome (e.g. disease prevention), then the conservative solution is best and the parsimonious solution very risky. If we have strong theoretical knowledge or prior evidence available for counterfactual reasoning, intermediate solutions are best. And so on. From this perspective, it is good that we can refer to different solution types with QCA. It forces researchers to think consciously about what the goal of their analysis is and how it can adequately be reached. It prevents them from just mechanically running some algorithm on their data.

All of the above is why I agree with the COMPASSS statement that …

“the current state of the art is characterized by discussions between leading methodologists about these questions, rather than by definitive and conclusive answers. It is therefore premature to conclude that one solution type can generally be accepted or rejected as “correct”, as opposed to other solution types”.

QCA solution types and causal analysis

Qualitative Comparative Analysis (QCA) is a relatively young research methodology that has frequently been under attack from all corners, often for the wrong reasons. But there is also a significant controversy brewing within the community of people using set-theoretic methods (of which QCA is one example).

Recently, COMPASSS – a prominent network of scholars interested in QCA – issued a Statement on Rejecting Article Submissions because of QCA Solution Type. In this statement they ‘express the concern … about the practice of some anonymous reviewers to reject manuscripts during peer review for the sole, or primary, reason that the given study chooses one solution type over another’. The ‘solution type’ refers to the procedure used to minimize the ‘truth tables’ which collect the empirical data in QCA (and other set-theoretic) research when there are unobserved combinations of conditions (factors, variables) in the data. Essentially, when there is missing data (which is practically always), the solution type, together with the minimization algorithm, determines the inference you draw from the data.

I have not been involved in drawing up the statement (and I am not a member of COMPASSS), and I have not reviewed any articles using QCA recently, so I am not directly involved in this controversy on either side. At the same time, I have been interested in QCA and related methodologies for a while now, I have covered their basics in my textbook on research design, and I remain intrigued both by their promise and their limitations. So I feel like sharing my thoughts on the matter, even if others might have much more experience with QCA.

(1a) First, let me say that no matter what one thinks about the appropriateness of a solution type, no single criterion in isolation should be used to reject manuscripts during anonymous peer review. The reviewer’s recommendation should reflect not only the methodology used, but the original research goal and the types of inferences being made. I can only assume that for COMPASSS to issue such a statement, the problem has been one of systematic rejection due to this one single reason. This is worrisome because the peer review process does not offer possibilities for response, let alone for debate of methodological issues.

(1b) At the same time, if the method that is used is not appropriate for the research goal and does not support the inferences advanced in the manuscript, then rejection is warranted and no further justification is needed.

(2a) So it all depends whether a single solution type should be used in all QCA analyses. In principle, the answer to this question is ‘No’. There are three main types of solutions (parsimonious, complex, and intermediate), and each can be appropriate in different circumstances.

(2b) However, when it comes to causal analysis, my answer is ‘Yes. Only the parsimonious solution should be used to make causal inferences.’ My answer is based on Michael Baumgartner’s analysis (see the 2015 version here), and I will explain why I find it persuasive below. So, if manuscripts make causal claims based on non-parsimonious solution types, I would see that as sufficient grounds for rejection (or rather for revision), unless the authors explicitly subscribe to a very peculiar social ontology in which everything has causal relevance for an outcome unless we have evidence to the contrary (I will explain this below). In my view, the standard ontology is that no factor has causal relevance for an outcome unless we have evidence that it does.

To sum up so far, the COMPASSS people might be right in general, but for the important class of causal analyses they are wrong.  

(3) Why should only the parsimonious solution be used to make causal claims? In short, because the relations of necessity and sufficiency are monotonic (so that if A is sufficient for E, AB is also sufficient, for any arbitrary B). Imagine a causal structure in which the presence of A is necessary and sufficient for the presence of E and B is irrelevant. Further imagine that we only have two empirical observations, {ABE} and {aBe} (small letters denote the absence of the condition/outcome). The data is incomplete, as we have no information on what happens under the logically possible configurations {Ab} and {ab} (these would be logical remainders in the truth table), so we have to use some further rules (e.g. a solution type) to derive the outcome. The complex solution is AB->E (the presence of both A and B is necessary and sufficient for the outcome E to occur). This solution type assumes that we cannot ignore B: as we have no data on what happens when it is not present, it is prudent to assume that B matters and keep it in the resulting formula (the causal recipe). However, this formula and the conclusion it leads to are wrong, because we posited above that B is irrelevant. The parsimonious solution is A->E (the presence of A is necessary and sufficient for the presence of E). This solution has eliminated B on the assumption that B does not matter, as this results in a more parsimonious solution. This is the correct inference in our example.
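A minimal sketch of this example (my own illustration, checking sufficiency only against the two observed cases) makes the monotonicity problem concrete:

```python
# The two observed cases: {ABE} and {aBe}; rows {Ab} and {ab} are
# logical remainders we never get to see.
cases = [
    {"A": 1, "B": 1, "E": 1},  # {ABE}
    {"A": 0, "B": 1, "E": 0},  # {aBe}
]

def sufficient(conds, cases):
    """A conjunction of conditions is sufficient for E in the data if every
    observed case exhibiting all the conditions also exhibits E."""
    matching = [c for c in cases if all(c[k] == 1 for k in conds)]
    return bool(matching) and all(c["E"] == 1 for c in matching)

assert sufficient(["A"], cases)        # parsimonious solution: A -> E
assert sufficient(["A", "B"], cases)   # complex solution: AB -> E
assert not sufficient(["B"], cases)    # B alone is not sufficient

# Because B never varies across the observed cases, the data alone cannot
# expose its irrelevance: the complex solution keeps it, and it wrongly
# appears as a necessary component of the recipe.
```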

Our example is simple but in no way contrived (Baumgartner has other, more complex examples in the paper). In fact, we can add any number of factors to A and as long as they do not vary across our cases they will appear to be components of the outcome formula (causal recipe). That is, any random factor can be made to appear as causally relevant in the presence of limited diversity in the cases being studied. In the limit, we can make every aspect of a case appear to be causally relevant for an outcome, if we do not have cases combining factors in a way that makes it possible to disprove their illusory relevance.

You can say that this is only fair. But it would only be appropriate in a world where everything matters for everything else, unless some empirical cases point to the opposite. But such a worldview (ontology) is rare among social scientists (and I have never seen it openly endorsed). Note that the problem is not that we imply a separate independent effect of B on E: worse, the solution implies that B must be present for the effect of A to obtain.

To sum up so far, only the parsimonious solution type can provide causal inference from QCA data under a standard social ontology, because of the monotonicity of the relationships of necessity and sufficiency.

(4) So what is the response of the COMPASSS people to this analysis? In fact, I do not know. To the best of my knowledge, there has been no published response/critique to Michael Baumgartner’s article. In the Statement, the following arguments are given:

(a) ‘The field of QCA and set-theoretic methods is not quite standardized.’ ‘The field is currently witnessing an ongoing and welcome methodological debate about the correctness of different solution types.’ ‘the current state of the art is characterized by discussions between leading methodologists about these questions’.

All this might be the case, but the point about non-parsimonious solutions deriving faulty causal inferences seems settled, at least until there is a published response that rebukes it. There might be debate, but I have not seen a published response to Baumgartner’s analysis or any other persuasive argument why he is wrong on this point in particular.

(b) ‘users applying [QCA and other set-theoretic] methods who refer to established protocols of good practice must not be made responsible for the fact that, currently, several protocols are being promoted’. Well, users cannot be made responsible, but if the protocols they follow are faulty, their manuscripts cannot be accepted as the analyses would be wrong.

(5) So despite offering no reasons why non-parsimonious solutions are appropriate for causal analysis contra Baumgartner, COMPASSS’ statement finishes with ‘all solutions are empirically valid’.

I am not sure what this means. All solutions cannot be empirically valid, as they can point to contradictory conclusions. Either A->E or AB->E; either B is causally relevant or it is not. Technically, any solution might be valid in light of a set of background assumptions, research goals and analytic procedures (in the sense that both 2+2=4 and 2+2=5 are valid under some assumptions). But that’s the crux of the matter: if one has causal goals and uses the non-parsimonious solution, then the solution is only valid if one assumes that in the social world everything causes and conditions everything else unless proven otherwise.

To conclude, if a group of researchers have been systematically sabotaging the work of other scholars solely for using a certain type of solution concept, that’s bad. But if they have been rejecting manuscripts that have used non-parsimonious solutions to derive causal inferences without clear commitments to an ‘everything matters’ worldview, that seems OK to me, in light of the (published) methodological state of the art.

P.S. The issue of counterfactuals enters this debate quite often.
(a) But in his 2015 analysis Baumgartner does not invoke a regularity theory of causality. All he needs for the analysis is a notion of a cause as a difference-maker, which in my understanding is compatible with a counterfactual understanding of causality. So any rejection of his argument against non-parsimonious solutions cannot be derived from differences between regularity and counterfactual notions of causality.
(b) Baumgartner notes that the parsimonious solution sometimes requires one to make counterfactuals about impossible states of the world. With this critique he motivates abandoning the Quine-McCluskey Boolean minimization procedure (in the framework of which one must choose the parsimonious, complex, or intermediate solution types) altogether and adopting the coincidence analysis framework, which has the parsimonious solution ‘built-in’ in its algorithms. But this is not a critique against counterfactuals as such.
(c) At the same time, the complex solution also relies on assumption-based counterfactuals, namely that a factor matters unless shown otherwise. So the reliance on counterfactuals cannot be used to arbitrate this debate.

For further discussion of these issues, see Thiem and Baumgartner 2016, Ingo Rohlfing’s blog post (with a response in the comments by Michael Baumgartner), Schneider 2016, and the Standards of Good Practice in QCA.

[addendum 31/08/2017] Michael Baumgartner and Alrik Thiem have published a reply to the COMPASSS statement in which they write: ‘We endorse the prerogative of journal editors and reviewers to favor rejection if they come to the conclusion that a manuscript does not merit publication because of its choice of an unsuitable solution type.’ And they call for more debate.

Want to learn more about Qualitative Comparative Analysis? Start with any of these: Rihoux and Ragin (2008), Schneider and Wagemann (2012), Thiem and Dusa (2012), Ragin (2000), or Ragin (2014).

Is there an East-West divide in the European Union?

From the way the media reports on European Union negotiations, it is easy to get the impression that there is a rift between East and West European member states, and that the enlargement of the EU has compromised the EU’s decision-making capacity. In a text published at the EUROPP blog, I argue that there is no systematic evidence to support such claims:

“…in fact, against all odds, since 2004 the EU has managed to accommodate and integrate without much turbulence 13 new member states within its decision-making structures. This success is most remarkable and provides an important lesson for the future; a lesson that should not be overshadowed by the forthcoming exit of the United Kingdom from the EU.”

Read the whole thing; it has pretty pictures, too.

Is interpretation descriptive or explanatory?

One defining feature of interpretivist approaches to social science is the idea that the goal of analysis is to provide interpretations of social reality rather than law-based explanations. But of course nobody these days believes in law-based causality in the social world anyway, so the question remains whether interpretation is to be understood as purely descriptive or as explanatory. Here is what I wrote about this issue for an introductory chapter on research design in political science. The paragraph, however, will need to be removed from the text to make the chapter shorter, so I post it here instead. I would be glad to hear opinions from scholars who actually work with interpretivist methodologies:

It is difficult to position interpretation (in the narrow sense of the type of work interpretivist political scientists engage in) between description and explanation. Clifford Geertz notes that (ethnographic) description is interpretive (Geertz 1973: 20), but that still leaves open the question whether all interpretation is descriptive. Bevir and Rhodes (2016) insist that interpretivists reject a ‘scientific concept of causation’, but suggest that we can explain actions as products of subjective reasons, meanings, and beliefs. In addition, intentionalist explanations are to be supported by ‘narrative explanations’. In my view, however, a ‘narrative’ that ‘explains’ by relating actions to beliefs situated in a historical context is conceptually and observationally indistinguishable from a ‘thick description’, and better regarded as such.

Olympic medals, economic power and population size

The 2016 Rio Olympic games being officially over, we can obsess as much as we like over the final medal table, without the distraction of having to actually watch any sports. One of the basic questions to ponder about the medal table is to what extent Olympic glory is determined by the wealth, economic power and population size of the countries.

Many news outlets quickly calculated the ratios of the 2016 medal count with economic power and population size per country and presented the rankings of ‘medals won per billion of GDP’ and ‘medals won per million of population’ (for example here and here). But while these rankings are fun, they give us little idea about the relationships between economic power and population size, on the one hand, and Olympic success, on the other. Obviously, there are no deterministic links, but there could still be systematic relationships. So let’s see.

Data

I pulled from the Internet the total number of medals won at the 2016 Olympic games and assigned each country a score in the following way: each country got 5 points for a gold medal, 3 points for silver, and 1 point for bronze. (Different transformations of medals into points are of course possible.) To measure wealth and economic power, I got the GDP (at purchasing power parity) estimates for 2015 provided by the International Monetary Fund, complemented by data from the CIA Factbook (both sets of numbers available here). For population size, I used the Wikipedia list available here.
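The scoring rule can be sketched as follows (the function name is my own invention; the original analysis was done in R):

```python
# Scoring rule from the post: 5 points per gold, 3 per silver, 1 per bronze.
WEIGHTS = {"gold": 5, "silver": 3, "bronze": 1}

def medal_points(gold, silver, bronze):
    """Convert a country's medal counts into a single point score."""
    return (WEIGHTS["gold"] * gold
            + WEIGHTS["silver"] * silver
            + WEIGHTS["bronze"] * bronze)

print(medal_points(2, 1, 3))  # 2 gold, 1 silver, 3 bronze -> 16 points
```

A different choice of weights (say, 4/2/1) would change the rankings at the margin but not the broad patterns discussed below.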

Olympic medals and economic power

The plot below shows how total medal points (Y-axis) vary with GDP (X-axis). Each country is represented by a dot (ok, by a snowflake), and some countries are labeled. Clearly, and not very surprisingly, countries with higher GDP have won more medals in Rio. What is surprising, however, is that the relationship is not far from linear: the red line added to the plot is the OLS regression line, and it turns out that this line summarizes the relationship as well (or as badly) as other, more flexible alternatives (like the loess line shown on the plot in grey). The estimated linear positive relationship implies that, on average, each 1,000 billion of GDP brings about 16 more medal points (so ~315 billion earns you another gold medal).

olymp1
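A back-of-the-envelope check of that last figure (not part of the original analysis, just the implied arithmetic):

```python
# If 1,000 billion of GDP brings ~16 medal points, one gold medal
# (5 points under the scoring rule above) costs about 5 / 0.016 billion.
points_per_billion = 16 / 1000
gold_medal_points = 5
gdp_per_gold = gold_medal_points / points_per_billion
print(gdp_per_gold)  # ~312.5 billion of GDP per extra gold
```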

The other thing to note from the plot is that the relationship is between medal points and total GDP, not GDP per capita. In fact, GDP per capita, which measures the relative wealth of a country, has a much weaker relationship with Olympic success, with a number of very wealthy, and mostly very small, countries getting zero medals. The correlation of Olympic medal points with GDP is 0.80, while with GDP per capita it is 0.21. So it is absolute rather than relative wealth that matters more for Olympic glory. This would seem to make sense, as it is not money but people who compete at the games, and you need a large pool of contenders to have a chance. But let’s examine more closely whether and how population size matters.

Olympic medals and population size

The following plot shows how the number of 2016 Rio medal points earned by each country varies with population size. Overall, the relationship is positive, but it is not quite linear, and it is not very consistent (the correlation is 0.40). Some very populous countries, like India, Indonesia, and Pakistan, have won very few medals, while some very small ones have won at least one. The implied effect of population size is also small in substantive terms: each 10 million people are associated with 1 more medal point (so, a bronze); for reference, three quarters of the countries in the dataset have fewer than 25 million inhabitants.

olymp2

Putting everything together

Now, we can put both GDP and population size in the same statistical model with the aim of summarizing the observed distribution of medal points as best as we can. In addition to these two predictors, we can add an interaction between the two, as well as different non-linear transformations of the individual predictors. In fact, the possibilities for modeling are quite a few even with only two predictors, so we have to pick a standard for selecting the best model. As the goal is to describe the distribution of medal points, it makes sense to use the sum of the errors (the absolute values of the differences between the actual and predicted medal score for each country) that the models make as a benchmark.

I find that two models describe the data almost equally well. Both use simple OLS linear regression. The first one features population size, GDP, and GDP squared. In this multivariate model, population size turns out to have a negative relationship with Olympic success, net of economic power. GDP has a positive relationship, but the quadratic term implies that the effect is not truly linear but declines in magnitude with higher values of GDP. The substantive interpretation of this model is something along these lines: Olympic success increases at a slightly declining rate with the economic power of a country, but given a certain level of economic power, less populous countries do better. The sum of errors of Model 1 is 1691 medal points.

The second model is similar, but instead of the squared term for GDP it features an interaction between GDP and population size. The interaction turns out to be negative. This implies that economically powerful but populous countries do less well than their level of GDP alone would suggest. This interpretation is a bit strange: since population size is positively associated with GDP, it seems to suggest that it is relative wealth (GDP per capita) that matters. But this turns out not to be the case, as any model that features GDP per capita has a bigger sum of errors than either Model 1 or Model 2.

                     Model 1        Model 2
Population size      -0.20          -0.09
GDP                  +0.04          +0.03
GDP squared          -0.00000008    /
GDP*Population       /              -0.0000008
Sum of errors        1691           1678
Adjusted R-squared   0.83           0.81
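The benchmark used to compare the models can be sketched as follows (the medal-point and prediction numbers here are invented for illustration; only the error metric mirrors the one used in the text):

```python
# Sum of absolute errors: the benchmark used to pick between models.
def sum_of_errors(actual, predicted):
    """Sum of absolute differences between observed and predicted points."""
    return sum(abs(a - p) for a, p in zip(actual, predicted))

actual  = [221, 168, 83, 0]    # hypothetical observed medal points
model_1 = [150, 120, 60, 10]   # hypothetical Model 1 predictions
model_2 = [160, 130, 70, 5]    # hypothetical Model 2 predictions

print(sum_of_errors(actual, model_1))  # 152
print(sum_of_errors(actual, model_2))  # 117: the smaller total wins
```

In the actual analysis the same metric, computed over all countries, gives 1691 for Model 1 and 1678 for Model 2.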

Both models presented so far are linear, which is not entirely appropriate given that the outcome variable, medal points, is constrained to be non-negative and is not normally distributed. The models actually predict that some countries, like Kenya, should get a negative number of medal points, which is clearly impossible. To remedy that, we can use statistical models specifically developed for non-negative (count) data: Poisson, negative binomial, or even hurdle or zero-inflated models that can account for the excess number of countries with no medal points at all. I spent a good deal of time experimenting with these models, but I didn’t find any that improved at all on the simple linear models described above (it is actually quite hard even to evaluate the performance of these non-linear models). Let me know if you find a different model that does better than the ones reported here. (But please no geographical dummies or past Olympic performance measures; also, the Olympic delegation size would be a mediator and so not a proper predictor.)
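The appeal of count models can be seen in one line (the coefficients below are invented, purely for illustration): a linear predictor can dip below zero, while the exponential mean function of a Poisson-type model cannot:

```python
import math

# Invented coefficients for a single-predictor sketch.
intercept, slope = -2.0, 0.004
gdp = 100  # a small hypothetical GDP value (in billions)

linear_prediction = intercept + slope * gdp            # can go negative
poisson_prediction = math.exp(intercept + slope * gdp)  # always positive

print(linear_prediction)   # negative: an impossible medal count
print(poisson_prediction)  # positive (~0.2 expected medal points)
```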

The one model I can find that outperforms the simple OLS regressions is a generalized additive model (GAM) with a flexible form for the interaction. This model has a sum of errors of 1485, and the interaction surface looks like this:

interactionGDPpop

In conclusion, do the population size, economic power and wealth of countries account for their success at the 2016 Olympic games? Yes, to a large extent. It is economic power and not relative wealth that matters more, and population size actually has a negative effect once economic power is taken into account. So the relationships are rather complex and, as a reminder, far from deterministic.

Here is the data (text file): olypm. Let me know if you are interested in the R script for the analysis, and I will post it.
Finally, here is a ranking of the countries by the size of the model error (based on Model 2; negative predictions have been replaced with zero). This can be interpreted in the following way: the best way to summarize the distribution of medal points won at the 2016 Rio Olympic games as a function of population size and GDP is the model described above. This model implies a prediction for each country. The countries that outperform their model predictions have achieved more than their population size and GDP imply. The ones with negative errors underperform in the sense that they have achieved less than their population size and GDP imply.

country 2016 medals 2016 medal points predicted medal points model error
Great Britain 67 221 68 153
Russia 56 168 87 81
Australia 29 83 30 53
France 42 118 68 50
Kenya 13 49 0 49
New Zealand 18 52 4 48
Hungary 15 53 6 47
Netherlands 19 65 22 43
Jamaica 11 41 0 41
Croatia 10 36 2 34
Cuba 11 35 2 33
Azerbaijan 18 36 4 32
Germany 42 130 98 32
Uzbekistan 13 33 2 31
Italy 28 84 54 30
Kazakhstan 17 39 10 29
Denmark 15 35 7 28
Ukraine 11 29 5 24
Serbia 8 24 2 22
North Korea 7 21 0 21
Sweden 11 31 12 19
Belarus 9 21 4 17
Ethiopia 8 16 0 16
Georgia 7 17 1 16
South Korea 21 63 47 16
China 70 210 195 15
South Africa 10 30 15 15
Armenia 4 14 0 14
Greece 6 20 7 13
Slovakia 4 16 4 12
Spain 17 53 41 12
Colombia 8 24 14 10
Czech Republic 10 18 8 10
Slovenia 4 12 2 10
Switzerland 7 23 13 10
Bahamas 2 6 0 6
Bahrain 2 8 2 6
Ivory Coast 2 6 0 6
Belgium 6 18 13 5
Fiji 1 5 0 5
Kosovo 1 5 0 5
Tajikistan 1 5 0 5
Lithuania 4 6 2 4
Burundi 1 3 0 3
Grenada 1 3 0 3
Jordan 1 5 2 3
Mongolia 2 4 1 3
Niger 1 3 0 3
Puerto Rico 1 5 2 3
Bulgaria 3 5 3 2
Canada 22 44 43 1
Moldova 1 1 0 1
Romania 5 11 10 1
Vietnam 2 8 7 1
Afghanistan 0 0 0 0
American Samoa 0 0 0 0
Andorra 0 0 0 0
Antigua and Barbuda 0 0 0 0
Aruba 0 0 0 0
Barbados 0 0 0 0
Belize 0 0 0 0
Benin 0 0 0 0
Bermuda 0 0 0 0
Bhutan 0 0 0 0
British Virgin Islands 0 0 0 0
Burkina Faso 0 0 0 0
Cambodia 0 0 0 0
Cameroon 0 0 0 0
Cape Verde 0 0 0 0
Cayman Islands 0 0 0 0
Central African Republic 0 0 0 0
Chad 0 0 0 0
Comoros 0 0 0 0
Congo 0 0 0 0
Cook Islands 0 0 0 0
Djibouti 0 0 0 0
Dominica 0 0 0 0
DR Congo 0 0 0 0
Eritrea 0 0 0 0
Estonia 1 1 1 0
Gambia 0 0 0 0
Guam 0 0 0 0
Guinea 0 0 0 0
Guinea-Bissau 0 0 0 0
Guyana 0 0 0 0
Haiti 0 0 0 0
Honduras 0 0 0 0
Iceland 0 0 0 0
Kiribati 0 0 0 0
Kyrgyzstan 0 0 0 0
Laos 0 0 0 0
Lesotho 0 0 0 0
Liberia 0 0 0 0
Liechtenstein 0 0 0 0
Madagascar 0 0 0 0
Malawi 0 0 0 0
Maldives 0 0 0 0
Mali 0 0 0 0
Malta 0 0 0 0
Marshall Islands 0 0 0 0
Mauritania 0 0 0 0
Micronesia 0 0 0 0
Monaco 0 0 0 0
Montenegro 0 0 0 0
Mozambique 0 0 0 0
Nauru 0 0 0 0
Nepal 0 0 0 0
Nicaragua 0 0 0 0
Palau 0 0 0 0
Palestine 0 0 0 0
Papua New Guinea 0 0 0 0
Poland 11 25 25 0
Rwanda 0 0 0 0
Saint Kitts and Nevis 0 0 0 0
Saint Lucia 0 0 0 0
Samoa 0 0 0 0
San Marino 0 0 0 0
Sao Tome and Principe 0 0 0 0
Senegal 0 0 0 0
Seychelles 0 0 0 0
Sierra Leone 0 0 0 0
Solomon Islands 0 0 0 0
Somalia 0 0 0 0
South Sudan 0 0 0 0
St Vincent and the Grenadines 0 0 0 0
Suriname 0 0 0 0
Swaziland 0 0 0 0
Tanzania 0 0 0 0
Timor-Leste 0 0 0 0
Togo 0 0 0 0
Tonga 0 0 0 0
Trinidad and Tobago 1 1 1 0
Tunisia 3 3 3 0
Tuvalu 0 0 0 0
Uganda 0 0 0 0
US Virgin Islands 0 0 0 0
Vanuatu 0 0 0 0
Yemen 0 0 0 0
Zambia 0 0 0 0
Zimbabwe 0 0 0 0
Albania 0 0 1 -1
Bangladesh 0 0 1 -1
Bolivia 0 0 1 -1
Bosnia and Herzegovina 0 0 1 -1
Botswana 0 0 1 -1
Brunei 0 0 1 -1
Cyprus 0 0 1 -1
El Salvador 0 0 1 -1
Equatorial Guinea 0 0 1 -1
FYR Macedonia 0 0 1 -1
Gabon 0 0 1 -1
Ghana 0 0 1 -1
Ireland 2 6 7 -1
Latvia 0 0 1 -1
Mauritius 0 0 1 -1
Namibia 0 0 1 -1
Paraguay 0 0 1 -1
Sudan 0 0 1 -1
Syria 0 0 1 -1
Costa Rica 0 0 2 -2
Dominican Rep. 1 1 3 -2
Guatemala 0 0 2 -2
Libya 0 0 2 -2
Luxembourg 0 0 2 -2
Panama 0 0 2 -2
Turkmenistan 0 0 2 -2
Uruguay 0 0 2 -2
Angola 0 0 3 -3
Lebanon 0 0 3 -3
Myanmar 0 0 3 -3
Ecuador 0 0 4 -4
Morocco 1 1 5 -4
Sri Lanka 0 0 4 -4
Argentina 4 18 23 -5
Finland 1 1 6 -5
Israel 2 2 7 -5
Oman 0 0 5 -5
Qatar 1 3 8 -5
Thailand 6 18 23 -5
Norway 4 4 10 -6
Portugal 1 1 7 -6
Algeria 2 6 13 -7
Brazil 19 59 66 -7
Malaysia 5 13 20 -7
Venezuela 3 5 12 -7
Iran 8 22 30 -8
Pakistan 0 0 8 -8
Peru 0 0 8 -8
Philippines 1 3 11 -8
Singapore 1 5 13 -8
Austria 1 1 11 -10
Chile 0 0 10 -10
Hong Kong 0 0 11 -11
Nigeria 1 1 13 -12
India 2 4 17 -13
Iraq 0 0 13 -13
Japan 41 105 119 -14
U.A.E. 1 1 18 -17
Egypt 3 3 21 -18
Turkey 8 18 37 -19
Chinese Taipei 3 7 29 -22
Mexico 5 11 49 -38
Indonesia 3 11 51 -40
Saudi Arabia 0 0 44 -44
United States 121 379 431 -52