Explanation and the quest for ‘significant’ relationships. Part II

In Part I, I argued that the search for, and discovery of, statistically significant relationships does not amount to explanation, and is often misplaced in the social sciences because the variables purported to have effects on the outcome cannot be manipulated.

Just to make sure that my message is not misinterpreted – I am not arguing for a fixation on maximizing R-squared and other measures of model fit in statistical work, instead of the current focus on the size and significance of individual coefficients. R-squared has been rightly criticized as a standard of how good a model is** (see for example here). But I am not aware of any other measure or standard that can convincingly compare the explanatory potential of different models in different contexts. Predictive success might be one way to go, but prediction is altogether different from explanation.

I don’t expect much to change in the future with regard to the problem I outlined. In practice, all one can hope for is some clarity on the part of researchers about whether their objective is to explain (account for) an outcome or to find significant effects. The standards for evaluating progress towards the former objective (model fit, predictive success, ‘coverage’ in the QCA sense) should be different from the standards for the latter (statistical and practical significance, and the practical possibility to manipulate the exogenous variables).

Take the so-called garbage-can regressions, for example. These are models with tens of variables, all of which are interpreted causally if they reach the magic 5% significance level. The futility of this approach is matched only by its popularity in political science and public administration research. If the research objective is to explore a causal relationship, one had better focus on that variable and include covariates only if they are suspected to be correlated with both the outcome and the main independent variable of interest. Including everything else that happens to be within easy reach not only leads to inefficient estimation; the significance of these covariates should not be interpreted causally at all. On the other hand, if the objective is to comprehensively explain (account for) a certain phenomenon, then including as many variables as possible might be warranted – but then the significance of individual variables is of little interest.
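A toy simulation makes the garbage-can point concrete. The numbers below are entirely made up (not from any real study): we regress a pure-noise outcome on 20 irrelevant covariates, so every true coefficient is zero, and yet at the 5% level we should expect roughly one covariate to come out ‘significant’ by chance alone.

```python
# Hypothetical illustration: a "garbage-can" regression of a pure-noise
# outcome on 20 irrelevant covariates. All true coefficients are zero,
# yet ~1 in 20 will clear the 5% significance bar by chance.
import numpy as np

rng = np.random.default_rng(42)
n, k = 200, 20
X = rng.standard_normal((n, k))
y = rng.standard_normal(n)                  # outcome unrelated to every covariate

Xc = np.column_stack([np.ones(n), X])       # add an intercept
beta, _, _, _ = np.linalg.lstsq(Xc, y, rcond=None)
resid = y - Xc @ beta
sigma2 = resid @ resid / (n - k - 1)        # residual variance estimate
se = np.sqrt(sigma2 * np.diag(np.linalg.inv(Xc.T @ Xc)))
t = beta / se                               # t-statistics, intercept first

n_sig = np.sum(np.abs(t[1:]) > 1.96)        # covariates "significant" at 5%
print(n_sig)
```

Rerunning with different seeds shuffles which covariates look significant, which is exactly why interpreting each of them causally is futile.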

The goal of research is important when choosing the research design and the analytic approach. Different standards apply to explanation, the discovery of causal effects, and prediction.

**Just one small example from my current work – a model with one dependent and one exogenous time-series variable in levels, with a lagged dependent variable included on the right-hand side of the equation, produces an R-squared of 0.93. The same model in first differences has an R-squared of 0.03, while the regression coefficient of the exogenous variable remains significant in both models. So in the first case we can ‘explain’ 90% of the variation by reference to the past values of the outcome. Does this amount to an explanation in any meaningful sense? I guess that depends on the context. Does it provide any leverage for the researcher to manipulate the outcome? Not at all.
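The flavor of this footnote can be reproduced with simulated data (the series and parameters below are invented, not the ones from my work): a persistent outcome regressed in levels on its own lag yields a near-perfect R-squared, while the same relationship in first differences explains almost nothing, even though the exogenous variable genuinely matters in both.

```python
# Hypothetical illustration: levels-with-lagged-DV vs. first differences.
# y is a unit-root series genuinely driven by x with coefficient 0.3.
import numpy as np

rng = np.random.default_rng(0)
T = 500
x = rng.standard_normal(T)
e = rng.standard_normal(T)
y = np.zeros(T)
for t in range(1, T):
    y[t] = y[t - 1] + 0.3 * x[t] + e[t]

def r2(X, yv):
    """R-squared of an OLS fit of yv on X (intercept added)."""
    Xc = np.column_stack([np.ones(len(yv)), X])
    b, *_ = np.linalg.lstsq(Xc, yv, rcond=None)
    r = yv - Xc @ b
    return 1 - (r @ r) / ((yv - yv.mean()) @ (yv - yv.mean()))

# Levels, with the lagged dependent variable on the right-hand side:
r2_levels = r2(np.column_stack([y[:-1], x[1:]]), y[1:])
# First differences: the lag's contribution disappears.
r2_diffs = r2(x[1:], np.diff(y))
print(round(r2_levels, 2), round(r2_diffs, 2))
```

The lagged outcome soaks up almost all the variation in levels, which is precisely why the 0.93 figure ‘explains’ so much while offering no leverage over the outcome.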

Explanation and the quest for ‘significant’ relationships. Part I

The ultimate goal of social science is causal explanation*. The actual goal of most academic research is to discover significant relationships between variables. The two goals are supposed to be strongly related – by discovering (the) significant effects of exogenous (independent) variables, one accounts for the outcome of interest. In fact, the working assumption of the empiricist paradigm of social science research is that the two goals are essentially the same – explanation is the sum of the significant effects that we have discovered. Just look at what all the academic articles with ‘explanation’, ‘determinants’, and ‘causes’ in their titles do – they report significant effects, or associations, between variables.

The problem is that explanation and collecting significant associations are not the same. Of course they are not. The point is obvious to anyone uninitiated into the quantitative empiricist tradition of doing research, but seems to be lost on many of its practitioners. We could have discovered a significant determinant of X and still be miles (or even light-years) away from a convincing explanation of why and when X occurs. This is not because of the difficulties of causal identification – we could have satisfied all the conditions for causal inference from observational data, and the problem would remain. And it would not go away after we pay attention (as we should) to the fact that statistical significance is not the same as practical significance. Even the discovery of convincingly identified causal effects, large enough to be of practical rather than only statistical significance, does not amount to explanation. A successful explanation needs to account for the variation in X, and causal associations need not – they might be significant yet not make a visible dent in the unexplained variation in X. The difference I am talking about is partly akin to the difference between looking at the significance of individual regression coefficients and looking at the fit of the model as a whole (more on that will follow in Part II). The current standards of social science research tend to emphasize the former rather than the latter, which allows significant relationships to be sold as explanations.
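The gap between a significant coefficient and an accounted-for outcome is easy to demonstrate with invented numbers (this is a generic sketch, not data from any study): with a large enough sample, a genuinely causal but tiny effect produces an enormous t-statistic while explaining well under 1% of the variation in the outcome.

```python
# Hypothetical illustration: a true, "highly significant" effect that
# leaves the outcome essentially unexplained.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
x = rng.standard_normal(n)
y = 0.05 * x + rng.standard_normal(n)       # true effect of 0.05, lots of noise

xc = x - x.mean()
b = (xc @ (y - y.mean())) / (xc @ xc)       # OLS slope
a = y.mean() - b * x.mean()
resid = y - a - b * x
se = np.sqrt(resid @ resid / (n - 2) / (xc @ xc))

t_stat = b / se                             # far beyond the 1.96 threshold
r2 = 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))
print(round(t_stat, 1), round(r2, 4))       # significant, yet R^2 under 1%
```

By the significance standard this is a textbook success; by the account-for-the-variation standard it barely registers.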

The objection can be made that the discovery of causal effects is all we should aim for, and all we could hope for. Even if a causal relationship doesn’t account for large amounts of variation in the outcome of interest, it still makes a difference. After all, this is the approach taken in epidemiology, agricultural science and other fields (like beer production) where the statistical research paradigm has its origins. A pill might not treat all headaches, but if it has a positive and statistically significant effect, it will still help millions. But here is the trick – the quest for statistically significant relationships in epidemiology, agriculture, etc. is valuable because all these effects can be considered as interventions – the researchers have control over the formula of the pill, or the amount of pesticide, or the type of hops. In contrast, social science researchers too often seek and discover significant relationships between an outcome and variables that couldn’t even remotely be considered as interventions. So we end up with a pile of significant relationships which do not account for enough variation to count as a proper explanation, and which have no value as interventions because their manipulation is beyond our reach. To sum up, observational social science has borrowed an approach to causality which makes sense for experimental research, and applied its standards (namely, statistical significance) to a context where the discovery of significant relationships is less valuable because the ‘treatments’ cannot be manipulated. Meanwhile, what should really count – explaining when, how and why a phenomenon happens – is relegated to the background in the false belief that somehow the quest for significant relationships is a substitute. It is like trying to discover the fundamental function of the lungs with epidemiological methods, and claiming success when you prove that cold air significantly reduces lung capacity. While the inference might still be valuable, it is no substitute for the original goal.

In Part II, I will discuss what needs to be changed, and what can be changed in the current practice of empirical social science research to address the problem outlined above.

*In my understanding, all explanation is causal; ‘causal explanation’ is therefore a tautology, and I am going to drop the ‘causal’ part for the rest of the text.

Google tries to find the funniest videos

Following my recent post on the project which tries to explain why some video clips go viral, here is a report on Google’s efforts to find the funniest videos:

You’d think the reasons for something being funny were beyond the reach of science – but Google’s brain-box researchers have managed to come up with a formula for working out which YouTube video clips are the funniest.

The Google researcher behind the project is quoted saying:

‘If a user uses an “loooooool” vs an “loool”, does it mean they were more amused? We designed features to quantify the degree of emphasis on words associated with amusement in viewer comments.’

Other factors taken into account are tags, descriptions, and ‘whether audible laughter can be heard in the background’. Ultimately, the algorithm gives a ranking of the funniest videos (with No No No No Cat on top, since you asked).

Now, I usually have high respect for all things Google, but this ‘research’ at first appeared to be a total piece of junk. Of course, it turned out that the junk is just the way it was reported by the Daily Mail (cited above), the New Scientist, and countless other more or less reputable outlets.

Google’s new algorithm does not provide a normative ranking of the funniest videos ever based on some objective criteria; it is a predictive score of a video’s comedic potential. Google trained the algorithm on a bunch of videos (it’s unclear from the original source what the external ‘fun’ measure used for the training part was) in order to inductively extract features associated with a video being funny. Based on these features, the program can then score any possible video. But these scores are not normative measures; they are predictions. So No No No No Cat is not the funniest video ever [well, it might be, it’s pretty hilarious actually]; it is Google’s safest bet that the video would be considered funny.

The story is worth mentioning not only because it exposes yet another case of gross misinterpretation of a scientific project in the news, but because it nicely illustrates the differences between measurement, prediction, and explanation. The newspapers have taken Google’s project to be an exercise in measurement. As explained above, the goal is actually predictive in nature. But even if the algorithm had a 100% success rate in identifying potentially funny videos, that would still not count as an explanation of what makes a video funny. Just think about it – would a boring video become funny if we just added funny tags, dubbed in background laughter, and filled the comments with plenty of loools? Not really. In that respect Brent Coker’s approach, which I mentioned in a previous post, has real explanatory potential (although I doubt whether it has any explanatory power).

So, no need to panic, the formula for something being funny is as distant as ever.

P.S. In an ironic turn of events, now that No No No No Cat has gone viral, Google will never know whether the algorithm was very good, or whether everyone just wanted to see the video Google declared the funniest ever. Ah, the joys of social science research!

Is unit homogeneity a sufficient assumption for causal inference?

Is unit homogeneity a sufficient condition (assumption) for causal inference from observational data?

Re-reading King, Keohane and Verba’s bible on research design [lovingly known to all exposed to it as KKV], I think they regard unit homogeneity and conditional independence as alternative assumptions for causal inference. For example: “we provide an overview here of what is required in terms of the two possible assumptions that enable us to get around the fundamental problem [of causal inference]” (p.91, emphasis mine). However, I don’t see how unit homogeneity on its own can rule out endogeneity (that is, establish the direction of causality). In my understanding, endogeneity is automatically ruled out under conditional independence, but not under unit homogeneity (“Two units are homogeneous when the expected values of the dependent variables from each unit are the same when our explanatory variables takes on a particular value” [p.91]).

Going back to Holland’s seminal article, which provides the basis of KKV’s approach, we can confirm that unit homogeneity is listed as a sufficient condition for inference (p.948). But Holland divides variables into pre-exposure and post-exposure before he even discusses any of the additional assumptions, so reverse causality is ruled out from the start. Hence, in Holland’s context unit homogeneity can indeed be regarded as sufficient, but in KKV’s context, in my opinion, it needs to be coupled with some further condition (temporal precedence, for example) to ascertain the causal direction when making inferences from data.

The point is minor but can create confusion when presenting unit homogeneity and conditional independence side by side as alternative assumptions for inference.

Inspiring scientific concepts

EDGE asks 159 selected intellectuals: ‘What scientific concept would improve everybody’s cognitive toolkit?’

You are welcome to read the individual contributions, which range from a paragraph to a short essay, here. Many of the entries are truly inspiring, but I see little synergy in bringing 159 of them together. As in a group photo of beauty pageant contenders, the total appeal of the group photo is less than the sum of the individual attractiveness of its subjects.

But to my point: It is remarkable that so many of the answers (on my count, in excess of 30) deal, more or less directly, with causal inference. What is even more remarkable is that most of the concepts and ideas about causal inference mentioned by the world’s intellectual jet-set (no offense to those left out) are anything but new. Many of the ideas can be traced back to Popper’s The Logic of Scientific Discovery (1934) and Ronald Fisher’s The Design of Experiments (1935). So what is most remarkable of all is how long it takes for these ideas to sink in and diffuse through society.

Several posts focus on the Popperian requirement for falsifiability (Howard Gardner, Tania Lombrozo) and skeptical empiricism more generally (Gerald Holton). The scientific method is further evoked by Richard Dawkins on the double-blind control experiment (see also Roger Schank), Brian Knutson on replicability, and Kevin Kelly on the virtues of negative results. Mark Henderson advocates the use of the scientific method outside science (e.g. policy) – a plea that strikes a chord with this blog.

A significant sample of contributions relate to probability (Seth Lloyd, John Allen Paulos, Charles Seife), and the difficulties humans have in understanding risk, uncertainty and probabilities (Antony Garrett, Gerd Gigerenzer, Lawrence M. Krauss, Carlo Rovelli, Keith Devlin, Mahzarin Banaji, David Pizarro). W. Daniel Hillis and Keith Devlin mention possibility spaces and base rates, respectively, as concepts that might help.

Several authors warn of the dangers of anecdotal data (Susan Fiske, Robert Sapolsky), and Christine Finn insists that absence of evidence is not evidence of absence. Susan Blackmore reminds us that correlation is not a cause, and Diane Halpern critiques the cult of statistical significance. Beatrice Golomb discusses misinterpretations of the placebo effect.

You do want to check out some innovative approaches to causality – causation as an information flow (David Dalrymple), nexus causality (John Tooby), and Rebecca Newberger Goldstein’s ‘best explanation’ – which go beyond the “monocausalitis” disease identified by Ernst Poppel (related argument by Nigel Goldenfeld).

Some highlights from the remaining posts:

– Richard Thaler compares the economic concept of utility to aether.

– Eric R. Weinstein on kayfabe (!) – the fabricated competition in professional wrestling and… the study of economics

– Fiery Cushman on confabulation (“Guessing at plausible explanations for our behavior, and then regarding those guesses as introspective certainties”)

– Joshua D. Greene on supervenience (“The Set A properties supervene on the Set B properties if and only if no two things can differ in their A properties without also differing in their B properties”)

– Stephen M. Kosslyn  on constraint satisfaction as a decision mechanism

And Andrian Kreye mentions free jazz: