Following up on my post on QCA solution types and their appropriateness for causal analysis, Eva Thomann was kind enough to provide a reply. I am posting it here in its entirety:
Why I still don’t prefer parsimonious solutions (Eva Thomann)
Thank you very much, Dimiter, for initiating this blog debate and inviting me to reply. In your blog post, you outline why, absent counterevidence, you find it justified to reject applied Qualitative Comparative Analysis (QCA) paper submissions that do not use the parsimonious solution. I think I agree with some but not all of your points. Let me start by clarifying a few things.
Point of clarification 1: COMPASSS statement is about bad reviewer practice
It's good to see that we all seem to agree that “no single criterion in isolation should be used to reject manuscripts during anonymous peer review”. The reviewer practice addressed in the COMPASSS statement is a bad practice. Highlighting this bad reviewer practice is the sole purpose of this statement. Conversely, the COMPASSS statement does not take sides when it comes to preferring specific solution types over others. The statement also does not imply anything about the frequency of this reviewer practice – this part of your post is pure speculation. Personally, I have heard people complaining about getting papers rejected for promoting or using conservative (QCA-CS), intermediate (QCA-IS) and parsimonious solutions (QCA-PS) with about the same frequency. But it is of course impossible for COMPASSS to get a representative picture of this phenomenon.
The term “empirically valid” refers to the, to my best knowledge, entirely undisputed fact that all solution types are (at least) based on the information contained in the empirical data. The disputed question is how we can or should go “beyond the facts” in causally valid ways when deriving QCA solutions.
Having said this, I will take off my “hat” as a member of the COMPASSS steering committee and contribute a few points to this debate. These points represent my own personal view and not that of COMPASSS or any of its bodies. I write as someone who sometimes uses QCA in her research and teaches it, too. Since I am not a methodologist, I won't talk about fundamental issues of ontology and causality. I hope others will jump in on that.
Point of clarification 2: There is no point in personalizing this debate
In your comment you frequently refer to “the COMPASSS people”. But I find that pointless: COMPASSS hosts a broad variety of methodologists, users, practitioners, developers and teachers with different viewpoints and of different “colours and shapes”, some closer to “case-based” research, others closer to statistical/analytical research. Amongst others, Michael Baumgartner, whom you mention, is himself a member of the advisory board, and he has had methodological debates with his co-authors as well. Just because we can procedurally agree on a bad reviewer practice neither means we substantively agree on everything, nor does it imply that we disagree. History has amply shown how unproductive it can be for scientific progress when debates like these become personalized. Thus, if I could make one wish to you and everyone else engaging in this debate, it would be to talk about arguments rather than specific people. In what follows I will therefore refer to different approaches instead, except when referring to specific scholarly publications.
Point of clarification 3: There is more than one perspective on the validity of different solutions
As to your earlier point, which you essentially repeat here, that “but if two solutions produce different causal recipes, e.g. (1) AB-> E and (2) ABC-> E it cannot be that both (1) and (2) are valid”, my answer is: it depends on what you mean by “valid”.
It is common to look at QCA results as subset relations, here: statements of sufficiency. In a paper that is forthcoming in Sociological Methods & Research, Martino Maggetti and I call this the “approach emphasizing substantive interpretability”. From this perspective, the forward arrow “->” reads “is sufficient for”, and 1) in fact implies 2). Sufficiency means that X (here: AB) is a subset of Y (here: E). ABC is a subset of AB, and hence it is also a subset of E if AB is a subset of E. Logic dictates that any subset of a sufficient condition is also sufficient. Both are valid – they describe the sufficiency patterns in the data (and sometimes, some remainders) with different degrees of complexity.
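To make this concrete, here is a minimal sketch in R with made-up crisp-set data (the vectors and the helper function is_sufficient are purely illustrative and not part of any package):

```r
# Hypothetical crisp-set membership scores (1 = in the set, 0 = out).
A <- c(1, 1, 1, 0, 0, 1)
B <- c(1, 1, 0, 1, 0, 1)
C <- c(1, 0, 1, 1, 0, 0)
E <- c(1, 1, 0, 1, 0, 1)

# X is sufficient for Y (X is a subset of Y) if membership in X never
# exceeds membership in Y; pmin() is set intersection (logical AND).
is_sufficient <- function(x, y) all(x <= y)

is_sufficient(pmin(A, B), E)     # (1) AB -> E: TRUE in these data
is_sufficient(pmin(A, B, C), E)  # (2) ABC -> E: then TRUE as well
```

Because pmin(A, B, C) can never exceed pmin(A, B), the second check cannot fail once the first one holds – which is exactly the entailment of 2) by 1) described above.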
Scholars promoting an “approach emphasizing redundancy-free models” agree with that, if we speak of mere (monotonic) subset relations. Yet they require QCA solutions to be minimal statements of causal relevance. From this perspective, the arrow (it then is <->, see below) reads “is causally relevant for”, and if 1) is true, 2) cannot be true: 2) additionally grants causal relevance to C, but in 1) we said that only A and B are causally relevant. As a causal statement, we can think of 2) as claiming more than 1).
To proponents of the approach emphasizing substantive interpretability (and I am one of them), it all boils down to the question:
“Can something be incorrect that follows logically and inevitably from a correct statement?”
Their brains shout:
“No, of course it can’t!”
I am making an informed guess here: this fact is so blatantly obvious to most people well-versed in set theory that it does not require a formal reply.
For everyone else, it is important to understand that in order to follow the reasoning you are proposing in your comment, you have to buy into a whole set of assumptions that underlie the method promoted in the publication you are referring to (Baumgartner 2015), called Coincidence Analysis or CNA. Let me illustrate this.
Point of clarification 4: QCA is not CNA
In fact, one cannot accept 2) if 1) is true in the special case where the condition “AB” is both minimally sufficient and contained in a minimally necessary condition for an outcome – which is also the situation you refer to (in your point 3). We then have to replace the forward arrow “->” with “<->”. In such a situation, the X set and the Y set are equivalent. Of course, if AB and E are equivalent, then ABC and E cannot be equivalent at the same time. In reality, this – simultaneous necessity and sufficiency – is a rare scenario that requires a solution to be maximally parsimonious and to have both a high consistency (indicating sufficiency) AND a very high coverage (indicating necessity).
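For readers less familiar with these fit measures: for a sufficiency claim X -> Y, consistency and coverage are standardly computed from the case-level membership scores x_i and y_i (Ragin 2008) as

```latex
\text{consistency}(X \rightarrow Y) = \frac{\sum_i \min(x_i, y_i)}{\sum_i x_i},
\qquad
\text{coverage}(X \rightarrow Y) = \frac{\sum_i \min(x_i, y_i)}{\sum_i y_i}
```

When both values approach 1, the X set and the Y set are (nearly) identical – the simultaneous necessity and sufficiency scenario just described.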
But QCA – as opposed to CNA – is designed to assess necessary conditions and/or sufficient conditions. They don't have to be both. As soon as we are speaking of a condition that is sufficient but not necessary (or not part of a necessary condition), then, if 1) is correct, 2) also has to be correct. You acknowledge this when saying that “if A is sufficient for E, AB is also sufficient, for any arbitrary B”.
I will leave it to the methodologists to clarify whether it is ontologically desirable to empirically analyse sufficient but not necessary (or necessary but not sufficient) conditions. As a political scientist, I find it theoretically and empirically interesting. I believe this is in the tradition of much comparative political research. It is clear, and you seem to agree, that what we find to be correct entirely depends on how we define “correct” – there's a danger of circularity here. At this point it has to be pointed out that CNA is not QCA. Both are innovative, elegant and intriguing methods with their own pros and cons. I am personally quite fascinated by CNA and would like to see more applications of it, but I am not convinced that we can or need to transfer its assumptions to QCA.
What I like about the recent publications advocating an approach emphasizing redundancy-free models is that they highlight that not all conditions contained in QCA solutions may be causally interpretable, if only we knew the true data-generating process (DGP). That points to the general question of causal arguments made with QCA under limited diversity, which has received ample scholarly attention for quite a while already.
Point of agreement 1: We need a cumulative process of rational critique
You argue that “the point about non-parsimonious solutions deriving faulty causal inferences seems settled, at least until there is a published response that rebukes it”. But QCA scholars have long highlighted issues of implausible and untenable counterfactuals entailed in parsimonious solutions (e.g. here, here, here, here, here, here and here). None of the published articles advocating redundancy-free models has so far made a concrete attempt to rebut these arguments. Following your line of reasoning, the points made by previous scholarship about parsimonious solutions deriving faulty causal inferences seem equally settled, at least until there is a published response that rebuts them.
Indeed, advocates of redundancy-free models seem either to dismiss the relevance of counterfactuals altogether because CNA, so it is argued, does not rely on counterfactuals to derive solutions; OR to argue that, in the presence of limited diversity, all solutions rely on counterfactuals. (Wouldn't it be contradictory to argue both?) I personally would agree with the latter point. There can be no doubt that QCA (as opposed, perhaps, to CNA) is a set-theoretic, truth table based method that, in the presence of limited diversity, involves counterfactuals. Newer algorithms (such as eQMC, used in the QCA package for R) no longer actively “rely on” remainders for minimization, and they exclude difficult and untenable counterfactuals rather than including tenable and “easy” counterfactuals. But the reason why QCA involves counterfactuals remains that intermediate and parsimonious QCA solutions involve configurations of conditions, some of which are empirically observed while others (the counterfactuals) are not. There can be only one conclusion: the question of whether these counterfactuals are valid requires our keen attention.
Where does that leave us? To me, all of this certainly does not mean that “the reliance on counterfactuals cannot be used to arbitrate this debate”. It means that different scholars have highlighted different issues relating to the validity of all solution types. None of these points has been conclusively rebutted so far. That, of course, leaves users in a difficult situation. They should not be punished for consistently and correctly following protocols proposed by methodologists of one or another approach.
Point of agreement 2: In the presence of limited diversity, QCA solutions can err in different directions
Parsimonious solutions are by no means unaffected by the problem that limited empirical diversity challenges our confidence in inferences. Indeed, we should be careful not to overlook that they err, too. As Ingo Rohlfing has pointed out, the question in which direction we want to err is different from the question which solution is correct. The answer to the former question probably depends.
Let us return to the above example and assume that we have a truth table characterized by limited diversity. We get a conservative solution
(CS) ABC -> E,
and a parsimonious solution
(PS) A -> E.
Let us further assume that we know (which in reality we never do) that the true DGP is
(DGP) AB -> E.
Neither CS nor PS gives us the true DGP. To recap: to scholars emphasizing redundancy-free models, PS is “correct” because they define as “correct” a solution that does not contain causally irrelevant conditions. But note that PS here is also incomplete: the true result in this example is that, in order to observe the outcome E, A alone is not enough; it has to combine with B. Claiming that A alone is enough involves a counterfactual that could well be untenable. But the evidence alone does not allow us to conclude that B is irrelevant for E. It is usually only by making this type of oversimplification that parsimonious solutions reach the high coverage values required to be “causally interpretable” under an approach emphasizing redundancy-free models.
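The example can be reproduced in a few lines of R. This is a minimal sketch using the QCA package; the data frame is hypothetical and constructed so that A*B*C is the only observed positive configuration while all A = 0 configurations are observed negatives (output formatting may vary across package versions):

```r
library(QCA)

# Hypothetical crisp-set data. True DGP: E = A*B, but limited diversity:
# configurations with A = 1 and B = 0, and A*B without C, are unobserved.
dat <- data.frame(
  A = c(1, 0, 0, 0, 0),
  B = c(1, 1, 0, 0, 1),
  C = c(1, 0, 0, 1, 1),
  E = c(1, 0, 0, 0, 0)
)

tt <- truthTable(dat, outcome = "E")
minimize(tt)                 # conservative solution (CS): A*B*C -> E
minimize(tt, include = "?")  # parsimonious solution (PS): A -> E
```

Neither call recovers the true A*B: the conservative solution carries the irrelevant C along, while the parsimonious solution drops the relevant B by assuming that the unobserved A*~B remainders would also produce E.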
To anyone with some basic training in QCA, this should raise serious questions: isn't one of the core assumptions of QCA that we cannot interpret the individual conditions in its results in isolation, because they unfold their effect only in combination with other conditions? How, then, does QCA-PS fare when assessed against this assumption? I have not read a conclusive answer to this question yet.
Baumgartner and Thiem (2017) point out that with imperfect data, no method can be expected to deliver complete results. That may well be, but in QCA we deal with two types of completeness: the completeness of individual AND-configurations, and the inclusion of all substitutable paths or “causal recipes” combined by the logical OR. In order to interpret a QCA solution as a sufficient condition, I want to be reasonably sure that the respective AND-configuration in fact reliably implies the outcome (even if it omits other configurations that may not have been observed in my data). Using this criterion, QCA-PS arguably fares worst (it most often misses out on causally relevant factors) and QCA-CS fares best (though it most often also still includes causally irrelevant factors).
To be sure, QCA-PS is sufficient for the outcome in the dataset under question. But I am unsure how I have to read it: “either X implies Y, or I did not observe X”? Or “X is causally relevant for Y in the data under question, but I don't know if it suffices on its own”? There may well be specific situations in which all we want to know is whether some conditions are causally relevant subsets of sufficient conditions or not. But I find it misleading to claim that this is the only legitimate or even the main research interest of studies using QCA. I can think of many situations, such as public health crises or enforcing EU law, in which reliably achieving or preventing an outcome would have priority.
Let me be clear: the problem we are talking about is really neither QCA nor some solution type. The elephant in the room is essentially that observational data are rarely perfect and do not obey the laws of logic. But is QCA-PS really the best way out of this problem, the only one, or a way out at all?
Point of agreement 3: There are promising and less promising strategies for causal assessment
The technical moment of QCA shares with statistical techniques the fact that it is simply a cross-case comparison of data-set observations. As such, it of course also shares with other methods the limited possibility of directly deriving causal inferences from observational data. Most QCA scholars would therefore be very cautious about interpreting QCA results causally when using observational data and in the presence of limited diversity. Obviously, set relation does not equal causation. How, then, could a specific minimization algorithm alone plausibly facilitate causal interpretability?
QCA (as opposed to CNA) was always designed to be a multi-method approach. This means that the inferences of the cross-case comparison are not just interpreted as such, but strengthened and complemented with additional insights, usually theoretical, conceptual and case knowledge. Or, as Ragin (2008: 173) puts it:
“Social research (…) is built upon a foundation of substantive and theoretical knowledge, not just methodological technique”.
This way, we can combine the advantages offered by different methods and sources. Used in a formalized way, the combination of QCA with process tracing can even help to disentangle causally relevant from causally irrelevant conditions. This, of course, does not preclude the possibility that some solution types may lend themselves more to causal interpretation than others. It does suggest, though, that focusing on specific solution types alone is an ill-suited strategy for making valid causal assessments.
Point of disagreement: Nobody assumes that “everything matters”
Allow me to disagree that an approach emphasizing substantive interpretability assumes that “everything is relevant”. Of course that is nonsense. As with any other social science method I know, the researcher first screens the literature and the field in order to identify potentially relevant explanatory factors. The logic of truth table analysis (as opposed to CNA?) is then to start out with the subset of these previously identified conditions that themselves are consistently a subset of the outcome set, and then to search for evidence that they are irrelevant. This is not even an assumption, and it is very far from being “everything”.
Ways ahead
In my view it makes sense to have a division of labour: users follow protocols, methodologists foster methodological innovation and progress. I hope the above has made it clear that we are in the midst of a, in my view, welcome and needed debate about what “correctness” and “validity” mean in the QCA context. I find it useful to think of this as a diversity of approaches to QCA. It is important that researchers reflect on the ontology that underlies their work, but we should also avoid drawing premature conclusions.
Currently (but I may be proven wrong) I am thinking that each solution type has its merits and limitations. We can’t eliminate limited diversity, but we can use different solution types for different purposes. For example, if policymakers seek to avoid investing public money in potentially irrelevant measures, the PS could be best. If they are interested in creating situations that are 100% sure to ensure an outcome (e.g. disease prevention), then the conservative solution is best and the parsimonious solution very risky. If we have strong theoretical knowledge or prior evidence available for counterfactual reasoning, intermediate solutions are best. And so on. From this perspective, it is good that we can refer to different solution types with QCA. It forces researchers to think consciously about what the goal of their analysis is and how it can be adequately reached. It prevents them from just mechanically running some algorithm on their data.
All of the above is why I agree with the COMPASSS statement that …
“the current state of the art is characterized by discussions between leading methodologists about these questions, rather than by definitive and conclusive answers. It is therefore premature to conclude that one solution type can generally be accepted or rejected as “correct”, as opposed to other solution types”.
The comment by Eva Thomann on the initial blog post by Dimiter Toshkov shows, yet again, that the signers of the recent “COMPASSS Statement on Rejecting Article Submissions because of QCA Solution Type”, instead of churning out ever more convolutions and contradictions, should first come to a halt and examine QCA’s epistemological foundations a little more thoroughly. They should take a much closer look at QCA’s innermost mechanics, and they should finally come to an internal agreement on what the method’s actual search target is, instead of wriggling out every time they are asked what QCA actually does in the end: causal data analysis or something else.
I have been in the QCA business for more than seven years now, as a 100% full-time methodologist, researching this method, teaching it nationally and internationally, and publishing on it in numerous journals. What I still see, however, sends shivers down my spine. Were it not for the fact that QCA is now also employed in human-sensitive areas such as public health, I probably would not have cared too much about Eva’s comment here and would rather have invested my time in other things. But the seriousness of the situation demands that I do whatever I can, as a scientist whose foremost interest is scientific advance, to help applied researchers not fall into the many traps that Eva and her colleagues create (and even pay for at summer schools).
Eva’s comment is very long; hence, only the following two examples from it, which exemplify the inconsistencies in her arguments:
1.) Eva, who admits that she is no methodologist, nevertheless writes in a methodologically very bold way: “There can be no doubt that QCA […] is a set-theoretic, truth table based method that, in the presence of limited diversity, involves counterfactuals. Newer algorithms (such as eQMC, used in the QCA package for R) no longer actively “rely on” remainders for minimization, and they exclude difficult and untenable counterfactuals rather than including tenable and “easy” counterfactuals.” How awfully wrong this is can easily be verified by reading my article in the Journal of Mathematical Sociology, where I explain how eQMC works. In fact, eQMC does not work via remainders at all, and thus cannot exclude any counterfactuals whatsoever (see entry #11 at http://www.alrik-thiem.net/about/).
2.) Or she writes that it is an “entirely undisputed fact that all solution types are (at least) based on the information contained in the empirical data.” That is absolute nonsense. It is impossible to argue that QCA identifies INUS conditions based on the empirical data at hand when one produces the conservative or intermediate solution. A simple yet telling example is available in my article “Beyond the facts: Limited empirical diversity and the incorrectness of Qualitative Comparative Analysis”. The implication for Eva would thus be having to abandon the INUS theory; but which theory of causation does QCA then work with instead? Ragin, at least, argues that QCA is based on the INUS theory (http://smr.sagepub.com/cgi/content/abstract/36/4/431, pp. 431-432).
I could easily extend this list with at least half a dozen additional problems, but I would like to end this comment with a thought related to the COMPASSS statement, which Eva supports unhesitatingly. I find it worrying, if not scientifically scandalous, to see a group of researchers argue that it is “bad reviewer practice” when reviewers recommend rejection based on formal arguments that Michael Baumgartner and I have published. Bad reviewer practice may be not reading a manuscript thoroughly, or rejecting a manuscript based on political considerations rather than scientific ones; but rejecting manuscripts on the basis of scientific, published arguments is unquestionably an undisputed freedom in all scientific environments, except, it seems, that of COMPASSS. COMPASSS should respect that freedom and instead devote itself to publishing persuasive arguments that counter those Michael Baumgartner and I have presented against conservative and intermediate solutions.
Having read all these arguments, I would like to intervene in this debate to clarify a couple of things.
I have to say, not only as a member of the Management Team at COMPASSS, but also as developer of the package QCA in R, that I find myself in much agreement with Eva Thomann’s reply.
To clarify the eQMC matter: Eva Thomann is actually right. In her reply she had the knowledge advantage of having read a working version of my upcoming book “QCA with R”, with a chapter dedicated to what I call the “pseudo-counterfactual analysis”.
It should not be forgotten that I am the sole author of eQMC, the first version of which I published as a COMPASSS working paper in 2007. So it happens that I know for certain how eQMC works, and I can confirm that Alrik Thiem is mistaken about the use of remainders (hint: the minimization function has an argument called “omit”).
With respect to the alleged incorrectness of QCA’s conservative and intermediate solutions, I have my own, contrary opinion. There are solid proofs that I can present anytime in a formal debate, drawing on a variety of sources, from logically impossible claims to plain programming errors in the R code behind the series of articles. These programming errors are obvious to a trained eye and easy to expose even to a non-technical audience.
The logical impossibilities, combined with the programming errors, have the potential to falsify the conclusions emerging from these papers. To be clear and to avoid possible misunderstandings – although it is not wise to have a scientific debate using personally oriented arguments – I still have to emphasize that the programming errors responsible for the wrong conclusions are all located in package QCApro, not in package CNA. The latter has its merits and deserves proper attention as a worthy member of the QCA family.
Concretely, Adrian Dusa makes three claims in his post:
1.) The eQMC algorithm DOES use logical remainders (he supports Eva’s view that eQMC distinguishes between “easy” and “difficult” counterfactuals, which is only possible when drawing on logical remainders).
2.) The articles Michael Baumgartner and I have written, in which we demonstrate the incorrectness of conservative and intermediate solutions, are all wrong, for logical and/or programming reasons.
3.) My QCApro software package is responsible for these wrong conclusions, at least with respect to the programming errors alluded to.
RE 1.) It can easily be demonstrated that eQMC does not use logical remainders for minimization. This can be verified either by reading Adrian Dusa’s initial working paper (I quote: “The solution proposed in this paper solves this problem [QMC’s problem of becoming increasingly slower with increasing numbers of logical remainders], because finding the positive minimum prime implicants does not involve the remainders at all”), or by reading the paper I wrote with Adrian Dusa before I left the QCA package, where we specify this further: http://dx.doi.org/10.1080/0022250X.2014.897949. Hence, either Adrian Dusa has now changed the algorithm, in which case it is an algorithm different from eQMC, or he contradicts himself.
RE 2.) Adrian Dusa argues that “[t]here are solid proofs that I can present anytime in a formal debate”. Michael Baumgartner and I have invited the signers of the COMPASSS “Statement on Rejecting Article Submissions because of QCA Solution Type”, to whom Adrian Dusa belongs, to present such proofs to us at the next QCA workshop that is to take place in Zurich this December. So far, we have not received a response to our invitation.
RE 3.) I left the QCA package in 2015 to launch the QCApro package in 2016 because the QCA package applies a minimization procedure by default that mimics the one implemented in fs/QCA. The problem is that this procedure not only hides many viable models from the analyst, and thus suggests neat results when there are in fact none, but this background filtering process of models also creates the risk of eliminating the correct model (the usually unknown data-generating structure). A video on this problem can be watched here: https://youtu.be/n8k4OQY5mHg. Additionally, there are working examples in the QCApro package documentation that demonstrate the problem.
I am going to briefly respond, to hopefully (and completely) clarify matters.
Point no. 1
Regarding the eQMC algorithm, I offered a hint in my previous message, which was ignored. The quotes offered by Alrik Thiem are valid, the algorithm has not been changed, and yet I am not contradicting myself.
Instead, proper attention should have been paid to Eva Thomann’s initial phrase, which clearly refers to: “… exclude difficult and untenable counterfactuals …”. This points to ESA – the Enhanced Standard Analysis – for which eQMC uses the argument “omit” (renamed to “exclude” in the upcoming version).
When this argument is provided, eQMC adds (and therefore uses) the untenable assumptions to the set of negative output configurations. Although not covered in the joint article, eQMC really does use remainders for ESA, and has done so since version 1 of the QCA package.
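As a minimal sketch of what this means in practice, reusing the hypothetical truth table from the example further above (and assuming the package’s binary row numbering, in which rows 5 and 6 are the A*~B remainders; in the renamed interface the argument is “exclude” rather than “omit”):

```r
# ESA: declare the A*~B remainders (truth table rows 5 and 6, i.e.
# A*~B*~C and A*~B*C) untenable. eQMC then adds them to the negative
# output configurations, so they cannot serve as simplifying assumptions.
tt <- truthTable(dat, outcome = "E")
minimize(tt, include = "?", omit = c(5, 6))  # yields A*B -> E
```

With the untenable counterfactuals barred, the enhanced parsimonious solution here coincides with the true DGP A*B of the earlier example.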
One would have expected Alrik Thiem, as a former co-author of package QCA, to already know what the argument “omit” is used for, especially since it is offered by his very own forked package QCApro.
Points no. 2 and 3, combined:
Not only am I ready to present the results of my analyses at any formal event, but I am considering doing even more and publishing them, so that they can be formally referred to by any user whose analyses are rejected based on the solution type.
The way I see this situation, there is an entire community of QCA theoreticians and practitioners who follow standards, while on the other side there are some people who say everyone else is wrong, they themselves being the only ones who know what is “correct”.
Normally, users are free to believe whom they want and life goes on, except that this time it is actually possible to expose the errors that invalidate those claims.
Switching the readers’ attention to an alleged problem in other software (features, not bugs) does not change the fact that there are demonstrable, genuine problems in package QCApro.
I have no idea how many researchers follow this debate here. In any case, I believe this discussion is highly relevant because it signals an important crossroads at which the community (counting in QCA, CNA and all other related techniques) finds itself. Otherwise COMPASSS would not have taken such drastic measures as openly calling on journal editors to ignore certain reviews. Any opportunity to continue this debate, as long as it stays within the bounds of respect, should thus be very much welcomed, be it at conferences, in publications, or in fora and blogs such as this one by Dimiter Toshkov. Now to my reply…
Dear Adrian, I certainly did not ignore your hint. As you will surely remember, the “omit” argument was suggested by myself back then (in 2011) to deal with so-called “contradictory simplifying assumptions”. T/ESA came up in the literature only later (2012/2013). T/ESA has various components, one of which is to declare some remainders as not being sufficient for the outcome, just as the conservative and the intermediate solution of QCA do. Yet that has nothing to do with the debate we’re having here. Any algorithm can be made to work with an argument such as “omit” because that argument directly influences the input to the algorithm, not its procedural protocol.
I have kept many things from the QCA package in my QCApro package because I wanted to still have the opportunity to do such data simulations as in my 2017 Sociological Methods & Research paper (http://journals.sagepub.com/doi/full/10.1177/0049124117701487), where Michael Baumgartner and I show the conservative and intermediate search strategy of QCA to be incorrect. The documentation of QCApro, however, clearly warns applied users not to run conservative and intermediate solutions for analyzing their data. QCApro was always intended as a package for applied users, as well as a package for methodologists analyzing methodological properties of QCA and its various procedures, such as T/ESA (see also my publication in Political Analysis: https://doi.org/10.1093/pan/mpw024).
To reiterate, I would also very much welcome your contribution, and everyone else’s, to the debate around solution types, be it a presentation at a conference or a publication, or both. That there is no consensus ANY LONGER regarding the issue of solution types, contrary to what you and your colleagues from the COMPASSS Management Board and Steering Committee suggest, is most vividly demonstrated by the fact that Michael Baumgartner’s and my publications have undergone peer review as well, and have been found convincing by many experts.
You see these publications as going against received wisdom, which they absolutely do, and therefore as erroneous. I see this very differently. I believe that challenging received wisdom, if there are good reasons to do so, as argued in the abovementioned publications, is the core driver of scientific advance. Everyone is free to evaluate these arguments and to eventually buy into or out of them, with all the consequences implied by such a decision.
I also do not wish to transform this into a personal debate, but nevertheless feel compelled to make some final clarifications. For the readers.
The fact that X says something does not make that something real, no matter how hard X tries. It is just what X says, and this is valid for both the current debate, and also for all the articles being mentioned.
For instance, the argument “omit” has very much to do with this debate, precisely because “it directly influences the input to the algorithm”. If the remainders are part of the input, it means the algorithm does actually use them. Whatever is meant by “procedural protocol” does not have much to do with the programming of eQMC (which is completely mine, so I know better), or even with logic, for that matter.
If it’s part of the input, it is used. Period.
A published article is not necessarily a true article. It is just an article, and history has witnessed thousands of published articles being proven wrong. The fact that these articles have gone through peer review is not proof of an ultimate truth either. The reviewers should not be blamed for being tricked by faulty programming; they review the papers, not the R code behind them. But if the programming is erroneous, as I can demonstrate, the fault is entirely the authors’.
Dear Adrian, it is fine that you keep stressing that our publications are all wrong, but you still owe the scientific community an argument. Once it has been presented, we can continue debating, if that is ok with you.
In order to avoid misrepresentations, I am reposting the COMPASSS Statement here in its entirety.
Facts include that the statement:
– emphasizes the current lack of broad scientific consensus about the validity of different solution types
– neither refers to any specific publications nor expresses any preference of one solution type over another
– highlights the need for users to explicitly justify their choice of a solution type (naturally based on scientific arguments)
– encourages reviewers to make their (scientific) arguments and concerns clear in their reviews
– condemns a reviewer practice of making recommendations that are solely or primarily based on the alleged existence of a standard where there presently is none.
– is signed by world-leading QCA methodologists as well as QCA users and teachers.
“COMPASSS Statement on Rejecting Article Submissions because of QCA Solution Type
14 August 2017
COMPASSS (COMPArative Methods for Systematic cross-caSe analySis) is a worldwide network bringing together scholars and practitioners who share a common interest in theoretical, methodological and practical advancements in a systematic comparative case approach to research which stresses the use of a configurational logic, the existence of multiple causality and the importance of a careful construction of research populations. It was launched in 2003, and its management was re-organized in 2008, 2012 and 2016 to better accommodate the growing needs in the field. COMPASSS comprises three main bodies: an Advisory Board, a Steering Committee and a Management Team, which represent a diverse range of active members of this methodological community.
The field of qualitative comparative and set-theoretic methods is not only in the midst of a process of constant development and innovation, it also is not characterized by similar levels of standardization as some mainstream statistical techniques. Amongst others, the field is currently witnessing an ongoing and welcome methodological debate about the correctness of different solution types (conservative/complex, intermediate, parsimonious) when applying the methods of Qualitative Comparative Analysis (QCA) and Coincidence Analysis (CNA) to empirical data. The present statement is to express the concern of COMPASSS about the practice of some anonymous reviewers to reject manuscripts during peer review for the sole, or primary, reason that the given study chooses one solution type over another.
COMPASSS embraces the position that standards are community goods that, by definition, cannot be coerced by individuals or a minority of scholars via anonymous peer review processes. Specifically, COMPASSS rejects the position that a single solution type is always methodologically superior for several reasons. First, the current state of the art is characterized by discussions between leading methodologists about these questions, rather than by definitive and conclusive answers. It is therefore premature to conclude that one solution type can generally be accepted or rejected as “correct”, as opposed to other solution types. Second, users applying these methods who refer to established protocols of good practice must not be made responsible for the fact that, currently, several protocols are being promoted. Such debates should be resolved between methodologists, and not by imposing protocols on users about which no broad scientific consensus has been reached as yet.
COMPASSS wishes to draw the attention of journal editors and authors to this issue. Reviewers are certainly encouraged to make their arguments and concerns clear in their reviews. Yet under the current state of the art, the use of a specific solution type is not an acceptable reason for rejecting a manuscript, especially not in isolation. COMPASSS runs a working paper series with a checklist of practices that speak for the quality of a QCA study. Amongst them is the explicit justification of the choice of a specific solution type, as well as reporting all three solution types, for example, in an appendix. This checklist reflects the scientific consensus that all solutions are empirically valid, that including the full range of solutions is typically best, and that the researcher needs to justify the final choice of solutions(s) that they report.
Signed,
The COMPASSS Management Team
Benoît RIHOUX (Université catholique de Louvain, Belgium): legal representative, Newsletters
Claude RUBINSON (University of Houston-Downtown, USA): managing editor COMPASSS WP series, bibliography, and Website
Samuel DEFACQZ (Université catholique de Louvain, Belgium)
Priscilla ÁLAMOS-CONCHA (Université catholique de Louvain, Belgium)
Elin MONSTAD (University of Bergen, Norway)
The COMPASSS Steering Committee
Sophia SEUNG-YOON LEE (Ewha Womans University, Korea): chair of Steering Committee
Konan Anderson SENY KAN (Toulouse Business School, France): vice-chair of Steering Committee
Adrian DUŞA (University of Bucharest, Romania)
Peer FISS (USC-Marshall School of Business, USA)
Wendy OLSEN (University of Manchester, UK)
Charles C. RAGIN (University of California, Irvine, USA)
Eva THOMANN (University of Exeter, UK)
Claudius WAGEMANN (Goethe University Frankfurt, Germany)”
Dear Eva, is it not a contradiction that you support justifying the “choice of a solution type (naturally based on scientific arguments)” yet at the same time criticize the use of a specific solution type as not being “an acceptable reason for rejecting a manuscript” if there are scientific arguments for doing so?!
Irrespective of how you resolve that contradiction, I find it completely acceptable if a reviewer rejects a manuscript, possibly after a first round of revisions, that claims to do causal data analysis, yet that still uses conservative or intermediate solutions. And I find it highly problematic if you and your colleagues from the COMPASSS Management Board and Steering Committee, without presenting any scientific argument, simply proclaim that “under the current state of the art, the use of a specific solution type is not an acceptable reason for rejecting a manuscript” because you claim for yourself a position of authority that you simply do not have.
Reviewers are free to write whatever they want, and if they are convinced by the soundness of Michael Baumgartner’s and my publications (the COMPASSS statement explicitly mentions the term “correctness”, and this is a term only Michael Baumgartner and I have used in our publications on QCA solution types), and thus see this as state-of-the-art, it is their right to express this freely in their assessment of a submitted manuscript, and finally, their recommendation for revision or rejection. And it is the decision of the journal editor(s) to either use these reviewer reports or not. For these conscientious decisions, neither reviewers nor editors should be reprimanded by COMPASSS.
[…] Some reactions to and discussion of the statement regarding the rejection of manuscripts based on QCA solution type have been published by Michael Baumgartner and Alrik Thiem at Dimiter Toshkov’s blog (25 Aug 2017; 4 Sept 2017). […]
About two years after this thread, I believe that I have found the refutation of Baumgartner and Thiem. Please have a look at my manuscript here:
https://www.researchgate.net/publication/333603232_Critical_tension_sufficiency_and_parsimony_in_QCA
Comments are naturally more than welcome; I would be very curious to hear from the supporters of B&T.