Academic fraud reaching new heights

Academic fraud is reaching new lows. Dutch social psychologist Diederik Stapel (Tilburg University) is the culprit this time.

A commission looking into the issue released a report [in Dutch] on Monday saying that “the extent of fraud is very significant” (p.5). Stapel fabricated data for at least 30 papers published over a period of at least nine years (the investigation is still ongoing, and the number may rise to 150). Entire datasets supporting his hypotheses were made up out of thin air. He also frequently gave fabricated data to colleagues and PhD students to analyze and co-author papers together.

Diederik Stapel was an eminent and ‘charismatic’ scholar whose research made global news on more than one occasion. He was awarded a Pioneer grant by the Dutch National Science Foundation. He is the man behind a string of sexy, made-up findings.

What a painfully ironic turn of events for Stapel, who also published a paper on the way scientists react to a plagiarism scandal.

The whole affair first came to light this August, when three young colleagues of Stapel suspected that something wasn’t quite right and informed the university. What is especially worrisome is that on a number of previous occasions people had implicated Stapel in wrongdoing, but their warnings were not followed up. In hindsight, it is easy to see that the data was just too good to be true: always yielding improbably large effects supporting the hypotheses, no missing data, no outliers, and so on. He didn’t even show any finesse or statistical sophistication in the fabrication. Still, co-authors, reviewers, and journal editors failed to spot the fraud across so many years and so many papers.

Stapel responds that the mistakes he made were “not because of self-interest”. Interesting… A longer statement is expected on Monday. Tilburg University has already suspended Stapel and will decide what other measures to take once all investigations are over.

There are so many things going wrong on so many different levels here, but I will only comment on the role of the academic journals in this affair. How is it possible that all the reviewers missed the clues that something was fishy? A close reading should have revealed a pattern of improbably successful results. Are suspicions that results are too good to be true enough to reject an article? Probably not. But they are enough to request more details about how the data was gathered. And, at the very least, the reviewers could have alerted the editors. It is probably too far-fetched to expect the data to be provided with the submission for review, but a close inspection of summary statistics, cross-correlations, and the like could have detected the fabrication.
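To make the idea concrete, here is a minimal sketch (in Python, purely illustrative; the effect sizes and thresholds are hypothetical, not taken from the Stapel case) of the kind of screening a reviewer or editor could run on reported summary statistics: flag a series of studies whose standardized effects are uniformly large and suspiciously consistent.

```python
import statistics

# Hypothetical reported effect sizes (Cohen's d) from a series of studies
# by the same author -- these numbers are made up for illustration only.
reported_d = [0.92, 0.88, 0.95, 0.90, 0.93, 0.91]

mean_d = statistics.mean(reported_d)
sd_d = statistics.stdev(reported_d)

# Crude "too good to be true" heuristics: every study shows a large effect,
# and the effects barely vary across samples -- real data are noisier.
all_large = all(d > 0.8 for d in reported_d)
too_uniform = sd_d < 0.05

if all_large and too_uniform:
    print(f"Flag for scrutiny: mean d = {mean_d:.2f}, sd = {sd_d:.2f}")
    print("Uniformly large effects with almost no variation; request the raw data.")
```

This is obviously not proof of fraud, only a prompt to ask for the underlying data, which is exactly the step that never seems to have been taken here.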

But the bigger problem is the lack of incentives for replication. A pattern of strong results that cannot be replicated would have uncovered the fraud much more quickly but, of course, nobody (or very few) bothered to replicate. And why would they? In a recent case, a leading psychology journal which initially published some outlandish claims about the effects of precognition refused to publish unsuccessful attempts to repeat the results, on the grounds that it does not publish replications! So Stapel might blame the ‘publish or perish’ culture for his misdemeanors, but journal policies have to share part of the blame.

On a side note: psychology and social psychology are especially prone to this type of data fabrication. Historians work with documentary sources that can easily be checked (as when a team of Dutch scholars exposed the numerous problems with the sources and the evidence in Andrew Moravcsik’s widely acclaimed The Choice for Europe). In political science and public administration, data is often derived from the analysis of documents and the observation of institutions, and, while mistakes can happen, they are relatively easy to spot. And data collection often requires a collective effort involving a number of scholars (e.g. in estimating party positions from manifestos or conducting representative surveys of political attitudes), which makes fraud on such a scale less likely. I hope not to be proven wrong too soon.

For more info on the Stapel affair: an article in English is available here, and in Dutch here. Hat tips to Patrick and Toon for providing info and links.

The present and the future of academic publishing

Academic publishing remains one of the most mysterious industries to me even after being caught in its web for a while. I have found no better presentation of the idiocy of the whole system than this video:


more here

Unfortunately, recent developments (at least in social science journals) do not make me very hopeful about the future. Economics journals are abandoning double-blind review (see for example here), and Political Analysis, which prides itself on being the number one political science journal, recently announced that it will do the same (there does not seem to be an official announcement yet on the site of the journal). According to the new policy, the identity of the authors will be revealed to the reviewers (who remain anonymous). The main argument for doing so is that in many cases the reviewers can guess the authors anyway. It is puzzling that economists and analytical political scientists, of all people, would fall for this argument. Even if many reviewers can guess or google the identity of the authors, double-blind review is still a Pareto improvement over single-blind review: while it may not work in all cases, it doesn’t hurt in any.

I would rather encourage more accountability on the side of the reviewers. Anonymous or not, manuscript reviews should be public documents. Why not attach them to the digital copies of the articles when they are published (or, even better, when they are rejected)? I can see no harm in making the reviews publicly available by default. Instead, after serving as a reviewer for a paper submitted to the Journal of Common Market Studies, I was denied a request to see the other reviews after the editorial decision was made. I can see how such concealment can be beneficial for the discretion of the editors, but I fail to see how it improves genuine academic discussion and the advancement of knowledge, which, to my mind, is the objective of the entire system of academic publishing.

To end on a bright note, last month Princeton University decided to ban researchers from handing over the copyright of scholarly articles to journal publishers. Hopefully, that will not remain an isolated incident.