Google tries to find the funniest videos

Following my recent post on a project that tries to explain why some video clips go viral, here is a report on Google’s efforts to find the funniest videos:

You’d think the reasons for something being funny were beyond the reach of science – but Google’s brain-box researchers have managed to come up with a formula for working out which YouTube video clips are the funniest.

The Google researcher behind the project is quoted saying:

‘If a user uses an “loooooool” vs an “loool”, does it mean they were more amused? We designed features to quantify the degree of emphasis on words associated with amusement in viewer comments.’

Other factors taken into account are tags, descriptions, and ‘whether audible laughter can be heard in the background’. Ultimately, the algorithm gives a ranking of the funniest videos (with No No No No Cat on top, since you asked).
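Purely as an illustration of what such comment-based features might look like, here is a sketch in Python. Everything here is my own invention for exposition – the function names, the tag list, and the scoring rule are assumptions, not Google’s actual system:

```python
import re


def emphasis_score(comment):
    """Length of the longest 'lo+l' run in a comment -- a crude proxy
    for the degree of emphasis ('loool' scores higher than 'lool')."""
    runs = re.findall(r"lo+l", comment.lower())
    return max((len(r) for r in runs), default=0)


def funniness_features(video):
    """Collect a few toy features of the kind described in the article."""
    comments = video.get("comments", [])
    return {
        # strongest 'lool'-style emphasis across all comments
        "max_lol_emphasis": max((emphasis_score(c) for c in comments), default=0),
        # count of amusement-related tags (hypothetical tag set)
        "funny_tags": sum(t in {"funny", "comedy", "lol"} for t in video.get("tags", [])),
        # whether audible laughter was detected in the clip
        "has_laughter": bool(video.get("audible_laughter", False)),
    }
```

A predictive model would then be trained on features like these against some external ‘funniness’ label, which is exactly the point developed below.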

Now I usually have high respect for all things Google, but this ‘research’ at first appeared to be a total piece of junk. Of course, it turned out that it was just the way it was reported by the Daily Mail (cited above), New Scientist, and countless other more or less reputable outlets.

Google’s new algorithm does not provide a normative ranking of the funniest videos ever based on some objective criteria; it is a predictive score of a video’s comedic potential. Google trained the algorithm on a bunch of videos (it’s unclear from the original source what the external ‘fun’ measure used for the training part was) in order to inductively extract features associated with a video being funny. Based on these features, the program can then score any possible video. But these scores are not normative measures; they are predictions. So No No No No Cat is not the funniest video ever [well, it might be, it’s pretty hilarious actually]; it is Google’s safest bet that the video would be considered funny.

The story is worth mentioning not only because it exposes yet another case of gross misinterpretation of a scientific project in the news, but because it nicely illustrates the differences between measurement, prediction, and explanation. The newspapers have taken Google’s project to be an exercise in measurement. As explained above, the goal is actually predictive in nature. But even if the algorithm had a 100% success rate in identifying potentially funny videos, that would still not count as an explanation of what makes a video funny. Just think about it – would a boring video become funny if we just added funny tags, background laughter, and plenty of loools in the comments? Not really. In that respect Brent Coker’s approach, which I mentioned in a previous post, has real explanatory potential (although I doubt whether it has any explanatory power).

So, no need to panic, the formula for something being funny is as distant as ever.

P.S. In an ironic turn of events, now that No No No No Cat has gone viral, Google will never know whether the algorithm was very good, or whether everyone just wanted to see the video Google declared the funniest ever. Ah, the joys of social science research!

Is unit homogeneity a sufficient assumption for causal inference?

Is unit homogeneity a sufficient condition (assumption) for causal inference from observational data?

Re-reading King, Keohane and Verba’s bible on research design [lovingly known to all exposed as KKV], I think they regard unit homogeneity and conditional independence as alternative assumptions for causal inference. For example: “we provide an overview here of what is required in terms of the two possible assumptions that enable us to get around the fundamental problem [of causal inference]” (p.91, emphasis mine). However, I don’t see how unit homogeneity on its own can rule out endogeneity (establish the direction of causality). In my understanding, endogeneity is automatically ruled out with conditional independence, but not with unit homogeneity (“Two units are homogeneous when the expected values of the dependent variables from each unit are the same when our explanatory variable takes on a particular value” [p.91]).

Going back to Holland’s seminal article which provides the basis of KKV’s approach, we can confirm that unit homogeneity is listed as a sufficient condition for inference (p.948). But Holland divides variables into pre-exposure and post-exposure before he even gets to discuss any of the additional assumptions, so reverse causality is ruled out altogether. Hence, in Holland’s context unit homogeneity can indeed be regarded as sufficient, but in my opinion in KKV’s context unit homogeneity needs to be coupled with some condition (temporal precedence for example) to ascertain the causal direction when making inferences from data.
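To make the point concrete, here is a toy numerical sketch (all numbers invented for illustration): unit homogeneity licenses the cross-unit difference in outcomes as a causal estimate, but nothing in the cross-sectional data itself tells us that x causes y rather than the reverse.

```python
# Two homogeneous units: by assumption, the expected outcome is the same
# function of the explanatory variable for both units:
#   E[y | x = 0] = 10 and E[y | x = 1] = 15, for unit A and unit B alike.
unit_a = {"x": 1, "y": 15}
unit_b = {"x": 0, "y": 10}

# Under unit homogeneity, the cross-unit difference identifies the
# causal effect of x on y:
effect_of_x = unit_a["y"] - unit_b["y"]  # 15 - 10 = 5

# But the very same two observations are equally consistent with the
# reverse story, in which y determines x (say, x = 1 whenever y > 12).
# Unit homogeneity alone does not rule out this endogenous reading;
# something extra (e.g. temporal precedence of x, as in Holland's
# pre-exposure/post-exposure distinction) is needed to fix the direction.
```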

The point is minor but can create confusion when presenting unit homogeneity and conditional independence side by side as alternative assumptions for inference.

Inspiring scientific concepts

EDGE asks 159 selected intellectuals: ‘What scientific concept would improve everybody’s cognitive toolkit?’

You are welcome to read the individual contributions, which range from a paragraph to a short essay, here. Many of the entries are truly inspiring, but I see little synergy in bringing 159 of them together. As in a group photo of beauty pageant contenders, the appeal of the whole is less than the sum of the individual attractiveness of its subjects.

But to my point: It is remarkable that so many of the answers (on my count, in excess of 30) deal, more or less directly, with causal inference. What is even more remarkable is that most of the concepts and ideas about causal inference mentioned by the world’s intellectual jet-set (no offense to those left out) are anything but new. Many of the ideas can be traced back to Popper’s The Logic of Scientific Discovery (1934) and Ronald Fisher’s The Design of Experiments (1935). So what is most remarkable of all is how long it takes for these ideas to sink in and diffuse in society.

Several posts focus on the Popperian requirement for falsifiability (Howard Gardner, Tania Lombrozo) and skeptical empiricism more generally (Gerald Holton). The scientific method is further invoked by Richard Dawkins on the double-blind control experiment (see also Roger Schank), Brian Knutson on replicability, and Kevin Kelly on the virtues of negative results. Mark Henderson advocates the use of the scientific method outside science (e.g. policy) – a plea that strikes a chord with this blog.

A significant sample of contributions relate to probability (Seth Lloyd, John Allen Paulos, Charles Seife), and the difficulties humans have in understanding risk, uncertainty and probabilities (Antony Garrett, Gerd Gigerenzer, Lawrence M. Krauss, Carlo Rovelli, Keith Devlin, Mahzarin Banaji, David Pizarro). W. Daniel Hillis and Keith Devlin mention possibility spaces and base rates, respectively, as concepts that might help.

Several authors warn of the dangers of anecdotal data (Susan Fiske, Robert Sapolsky), and Christine Finn insists that the absence of evidence is not evidence of absence. Susan Blackmore reminds us that correlation is not a cause, and Diane Halpern critiques the cult of statistical significance. Beatrice Golomb discusses misinterpretations of the placebo effect.

You do want to check out some innovative approaches to causality – causation as an information flow (David Dalrymple), nexus causality (John Tooby), and Rebecca Newberger Goldstein’s ‘best explanation’ – approaches that go beyond the “monocausalitis” disease identified by Ernst Poppel (related argument by Nigel Goldenfeld).

Some highlights from the remaining posts:

– Richard Thaler compares the economic concept of utility to aether.

– Eric R. Weinstein on kayfabe (!) – the fabricated competition in professional wrestling and… the study of economics

– Fiery Cushman on confabulation (“Guessing at plausible explanations for our behavior, and then regarding those guesses as introspective certainties”)

– Joshua D. Greene on supervenience (“The Set A properties supervene on the Set B properties if and only if no two things can differ in their A properties without also differing in their B properties”)

– Stephen M. Kosslyn on constraint satisfaction as a decision mechanism

And Andrian Kreye mentions free jazz: