Intuitions about case selection are often wrong

Imagine the following simple setup: there are two switches (X and Z) and a lamp (Y). Both switches and the lamp are ‘On’. You want to know what switch X does, but you have only one try at manipulating the switches. Which one would you switch off: X, Z, or does it not matter?

I ran a quick Twitter poll on the question. Almost half of the respondents thought it doesn’t matter, switching off X was the second most popular choice, and only 2 out of 15 would switch off Z to learn what X does. Yet it is by pressing Z that we have the best chance of learning something about the effect of X. This seems quite counter-intuitive, so let me explain.

First, let’s clarify the assumptions embedded in the setup: (A1) both switches and the lamp can be either ‘On’ [1] or ‘Off’ [0]; (A2) the lamp is controlled only by these switches: nothing outside the system affects its state; (A3) X and Z can work individually or only in combination (in which case the lamp is ‘On’ only if both switches are ‘On’ simultaneously).

Now let’s represent the information we have in a table. The first row is what we observe; the second follows from A2, since with both switches ‘Off’ nothing can keep the lamp ‘On’:

Switch X   Switch Z   Lamp Y
    1          1        1
    0          0        0

We are allowed to make one experiment in the setup (press only one switch). In other words, we can add an observation for one more row of the table. Which one should it be?

Well, let’s see what happens if we switch off X (let’s call this strategy S1). There are two possible outcomes: either the lamp stays on (S1a) or it goes off (S1b).

In the first case (represented as the second line in the table below) we can conclude that X is not necessary for the lamp to be ‘On’, but we do not know whether X can switch on the lamp on its own (whether it is sufficient to do so). Incidentally, we do learn that Z is sufficient on its own.

Switch X   Switch Z   Lamp Y
    1          1        1
    0          1        1
    0          0        0

In the second case (S1b), when the lamp goes off, we know that X is necessary for the outcome, but we do not know whether X can turn on the lamp on its own or only in combination with Z.

Switch X   Switch Z   Lamp Y
    1          1        1
    0          1        0
    0          0        0

To sum up, by pressing X we learn either (S1a) that X is not necessary, or (S1b) that X matters but we do not know whether on its own or only in combination with Z.


Now, let’s see what happens if we press Z (strategy S2). Again, either the lamp stays on (S2a) or it goes off (S2b).

Under the first scenario, we learn that X is sufficient to turn on the lamp.

Switch X   Switch Z   Lamp Y
    1          1        1
    1          0        1
    0          0        0

Under the second scenario, we learn that X is not sufficient to turn on the lamp. It is still possible that it is necessary for turning on the lamp in combination with Z.

Switch X   Switch Z   Lamp Y
    1          1        1
    1          0        0
    0          0        0

To sum up, by pressing Z we learn either (S2a) that X can turn on the lamp on its own, or (S2b) that it cannot turn on the lamp on its own but is possibly necessary in combination with Z.

Comparing the two sets of inferences, the second is clearly much more informative. By pressing Z we learn either that we can turn on the lamp by pressing X, or that we cannot unless Z is ‘On’. By pressing X we learn next to nothing: either we are still in the dark about whether X works on its own to turn on the lamp (sorry for the pun), or we learn that X matters but still do not know whether we also need Z to be ‘On’.

If you are still unconvinced, the following table summarizes all inferences under all strategies and contingencies about each of the possible effects (X, Z, and the interaction XZ):

X works on its own   Z works on its own   Only XZ works   Strategy
        ?                  True               False          S1a
        ?                  False                ?            S1b
      True                   ?                False          S2a
      False                  ?                  ?            S2b

It should be obvious now that we are better off pressing Z to learn about the effect of X.
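For those who like to double-check such bookkeeping with a few lines of code, here is a minimal Python sketch (my addition, not part of the original argument) that enumerates the four Boolean structures compatible with assumptions A1-A3 and the two known rows, and lists which of them survive each experiment and outcome:

```python
# All four candidate structures agree with the two known rows:
# (X=1, Z=1) -> 1 and (X=0, Z=0) -> 0.
STRUCTURES = {
    "X works on its own":         lambda x, z: x,
    "Z works on its own":         lambda x, z: z,
    "either switch works alone":  lambda x, z: x or z,
    "only X and Z together work": lambda x, z: x and z,
}

def consistent(observations):
    """Return the names of structures compatible with every (x, z, y) row."""
    return [name for name, f in STRUCTURES.items()
            if all(f(x, z) == y for x, z, y in observations)]

known = [(1, 1, 1), (0, 0, 0)]  # the starting table

# Try each strategy (which switch to turn off) and each possible outcome.
for label, x, z in [("S1: switch off X", 0, 1), ("S2: switch off Z", 1, 0)]:
    for y in (1, 0):
        survivors = consistent(known + [(x, z, y)])
        print(f"{label}, lamp {'stays on' if y else 'goes off'}: {survivors}")
```

Note that under S2 the surviving structures always agree on whether X works on its own, while under S1 they never do – which is exactly what the summary table says.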

Good, but what’s the relevance of this little game? Well, the game resembles a research design situation in which we have one observation (case), we have the resources to add only one more, and we have to select which observation to make. In other words, the game is about case selection.

For example, we observe a case with a rare outcome – say, successful regional integration. We suspect that two factors are at play, both of which are present in the case – say, high trade volume within the integrating bloc and a democratic form of government for all units. And we want to probe the effect of trade volume in particular. In that case, the analysis above suggests that we should choose a case that has the same volume of trade but a non-democratic form of government, rather than a case with a low volume of trade and a democratic form of government.

This result is counter-intuitive, so let’s spell out why. First, note that we are interested in the effect of X (the effect of the switch and of trade volume) and not in explaining Y (how to turn on the lamp or how regional integration comes about). This is a subtle difference in interpretation, but one that is crucial for the analysis. Second, note that we are more interested in the effect of X than in the effect of Z, although both are potential causes of Y. If both X and Z are of equal interest, then obviously it doesn’t matter which observation we make. Third, the result hinges on the assumption that nothing other than X or Z (or their interaction) matters for Y. Once we admit other possible causal variables into the setup, we are no longer better off switching Z to learn the effect of X.

Sooooo, don’t take this little game as general advice on case selection. But it definitely shows that when it comes to research design our intuitions cannot always be trusted.

P.S. One assumption on which the analysis does not depend is that effects and outcomes are binary: it works equally well with probabilistic effects that are additive or multiplicative (involving an interaction).
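To sketch why for the additive case (the parameterization is mine, not the original post’s): write the expected outcome as

\[
\mathbb{E}[Y] = \beta_X X + \beta_Z Z + \beta_{XZ} XZ ,
\]

where A2 forces the intercept to zero. The known rows give \(\mathbb{E}[Y \mid 0,0] = 0\) and \(\mathbb{E}[Y \mid 1,1] = \beta_X + \beta_Z + \beta_{XZ}\). Adding the row \((X{=}1, Z{=}0)\) identifies \(\beta_X = \mathbb{E}[Y \mid 1,0]\) directly, whereas adding \((X{=}0, Z{=}1)\) identifies only \(\beta_Z\) and leaves \(\beta_X\) entangled with \(\beta_{XZ}\). An analogous argument applies to the multiplicative case.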


Unit of analysis vs. Unit of observation

Having graded another batch of 40 student research proposals, I find that the distinction between ‘unit of analysis’ and ‘unit of observation’ proves to be, yet again, one of the trickiest for students to master.

After several years of experience, I think I have a good grasp of the difference between the two, but it obviously remains a challenge to explain it to students. King, Keohane and Verba (1994) [KKV] introduce the distinction in the context of descriptive inference, where it serves the argument that what goes under the heading of a ‘case study’ often actually contains many observations (p. 52; see also pp. 116-117). But, admittedly, the book is somewhat unclear about the distinction, and unambiguous definitions are not provided.

In my understanding, the unit of analysis (a case) is the level at which you pitch the conclusions, while the unit of observation is the level at which you collect the data. So the unit of observation and the unit of analysis can be the same, but they need not be. In the context of quantitative research, students could be the units of observation and classes the units of analysis, if classes are compared. Or students can be both the units of observation and the units of analysis, if students are compared. Or students can be the units of analysis and grades the units of observation, if several observations (grades) are available per student. So it all depends on the design. Simply put, the unit of observation is the row in the data table, but the unit of analysis can be at a higher level of aggregation.
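Here is a minimal illustration in Python/pandas (the data are hypothetical, my own, not KKV’s):

```python
import pandas as pd

# Unit of observation: the student. Each row is one observation.
students = pd.DataFrame({
    "student": ["Ann", "Ben", "Cat", "Dan"],
    "class":   ["A", "A", "B", "B"],
    "grade":   [8.0, 6.5, 7.0, 9.0],
})

# If classes are compared, the unit of analysis is the class: conclusions
# are pitched at class level, so student rows are aggregated up to classes.
classes = students.groupby("class", as_index=False)["grade"].mean()
print(classes)  # two rows now: one per unit of analysis
```

If students themselves are compared, no aggregation is needed and the two units coincide.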

In the context of qualitative research, it is more difficult to draw the distinction between the two, partly because the difference between analysis and observation is in general less clear-cut. In some sense, the same unit (case) traced over time provides distinct observations, but I am not sure to what extent these snapshots would be regarded as distinct ‘observations’ by qualitative researchers.

But more importantly, I am starting to feel that the distinction between units of analysis and units of observation creates more confusion than clarity. For the purposes of research design instruction, we would be better off if the term ‘case’ did not exist at all, so we could simply speak about observations (single observation vs. single-case study, observation selection vs. case selection, etc.). Of course, language policing never works, so we seem to be stuck in an unfortunate but unavoidable ambiguity.

Overview of the process and design of public administration research in Prezi

Here is the result of my attempt to use Prezi for the last presentation of the class on Research Design in Public Administration. I tried to use Prezi’s functionality to present in a novel form the same main lessons I have been emphasizing during the six weeks (yes, it is a short course). Some of the stuff is obviously an over-simplification, but the purpose is to focus on the big picture and draw the various threads of the course together.

Prezi seems fun, but I have two small complaints: (1) the handheld device I use to change PowerPoint slides from a distance doesn’t work with Prezi, and (2) I can’t find a way to make stuff (dis)appear à la PowerPoint without zooming in and out.

Social science in the courtroom

Everyone who is interested in the sociology of science, causal inference from observational data, employment gender discrimination, judicial sagas, or academic spats should read the latest issue of Sociological Methods & Research. The whole issue is devoted to the Wal-Mart Stores, Inc. v. Dukes et al. case – “the largest class-action employment discrimination suit in history” – with a focus on the uses of social science evidence in the courtroom.

The focal point of contestation is the report of Dr. Bielby – an expert for the plaintiffs. In a nutshell, the report says that the gender bias in promotion decisions at Wal-Mart can be attributed to the lack of effort to create a strong corporate culture and to limit the discretion managers have in promotion decisions, which in turn allows for biased decisions. The evidence is mostly 1) a literature review that supports the causal links between corporate policies and corporate culture, corporate culture and individual behavior, discretion and biased individual behavior, and corporate policies and outcomes, and 2) a description of the corporate policies and culture at Wal-Mart, which points to a relatively weak policy against gender discrimination and considerable discretion for managers in promotion decisions. Dr. Bielby describes the method as follows: “…look at distinctive features of the firm’s policies and practices and … evaluate them against what social scientific research shows to be factors that create and sustain bias and those that minimize bias” [the method is designated as “social framework analysis”].

What gives the case broader significance (apart from the fact that it directly concerns between half a million and a million and a half current and former employees of Wal-Mart) is the letter [amicus brief] the American Sociological Association (ASA) decided to send in support of Dr. Bielby’s report. In the letter, the ASA states that “the methods Dr. Bielby used are those social scientists rely on in scientific research that is published in top-quality peer-reviewed journals” and that “well done case studies are methodologically valid”. The Supreme Court, however, apparently begged to differ and rejected the plaintiffs’ claim.

The current issue of Sociological Methods & Research has two articles that attack the decision of the ASA to endorse Dr. Bielby’s methodology and two articles that support it. In my opinion, the former are right. Mitchell, Monahan, and Walker characterize Dr. Bielby’s approach as “subjective judgments about litigation materials collected and provided to the expert by the attorneys”, but even if that goes too far, Sørensen and Sharkey definitely have a point in writing that what Dr. Bielby does is engage in abductive reasoning – “generate a hypothesized explanation from an observed empirical phenomenon”. That is hardly a reliable way to make a valid inference about causes and effects. Employment discrimination might be consistent with high managerial discretion, but it is not necessarily caused by it.

What makes this academic exchange particularly juicy is the fact that most contributors (the editor of the journal included) have been opponents in the courtroom as well – well, not directly, but as experts for the two sides in numerous employment discrimination suits. Which probably raises the stakes, I guess. Here is the editor describing the process of putting the special issue together:

“Managing” these interchanges has been far more difficult than I had thought. Even around very technical issues, scholars can get very heated. Part of the problem, I believe, is that the academy and, certainly, the social sciences, and most specifically sociology, do not have a well-articulated set of norms about how to engage in constructive scientific discourse. Too often I have seen the following:
1. Claims that a person holds a position or has said something when he or she did not, that is, “putting words in a person’s mouth.”
2. Misconstrual, intentionally or not, of the meaning of what a person has written.
3. Questioning the expertise, intelligence, motives, or morals of an author.
4. Obfuscation by bringing in irrelevant or tangential points or material.” (pp. 552-553)

Academic discourse at its best.