Tag: academic publishing

Proposal for a World Congress on Referencing Styles

I have been busy over the last few days correcting proofs for two forthcoming articles. One of the journals accepts neither footnotes nor endnotes, so I had to find a place in the text for the more than 20 footnotes I had. As usual, most of these footnotes resulted directly from the review process, so getting rid of them was not an option, even if many are of marginal significance. The second journal accepts only footnotes – no in-text referencing at all – so I had to rework all the referencing into footnotes. Both journals demanded that I provide missing places of publication for books and missing page numbers for articles. Ah, the joys of academic work!

But seriously: how is it possible that a researcher working in the twenty-first century still has to spend his or her time changing commas into semicolons and abbreviating author names to conform to the style of a particular journal? I just don’t get it. I am all for referencing and beautifully formatted bibliographies, but can’t we all agree on a single style? Does it really matter whether the year of publication is put in brackets or not? Who cares if the first name of the author follows the family name or the other way round? Do we really need to know the place of publication of a book? And where do you actually look for this information? Is it Thousand Oaks, London, or New Delhi? All three appear on the back of a random SAGE book I picked from the shelf…

Review the reviews

Frank Häge alerts me to a new website that gives you the chance to review the reviews of your journal submissions: on this site, academic social science researchers have the opportunity to comment on the reviews they have received, and on the decision-making process about those reviews, for articles submitted for publication, book proposals, and funding applications. So far there seems to be only one submission (by the site’s author), but I can see the potential. The addition of a simple scoring system, so that you can rate your experience with certain journals, might work even better. The danger, of course, is that the website becomes just another channel for rejected authors to vent their frustration. In my opinion, making the reviews public (perhaps after the publication of the article) is the way to go to increase the accountability of the review system.

Writing with the rear-view mirror

Social science research is supposed to work like this: 1) You want to explain a certain case or a class of phenomena; 2) You develop a theory and derive a set of hypotheses; 3) You test the hypotheses with data; 4) You draw conclusions about the plausibility of the theory; 5) You write a paper with a structure (research question, theory, empirical analysis, conclusions) that mirrors the steps above. But in practice, social science research often works like this: 1) You want to explain a certain case or a class of phenomena; 2) You test a number of hypotheses with data; 3) You pick the hypotheses that matched the data best and combine them into a theory; 4) You conclude that this theory is plausible and relevant; 5) You write a paper with a structure (research question, theory, empirical analysis, conclusions) that does not reflect the steps above. In short, an inductive quest for a plausible explanation is masked and reported as deductive theory-testing. This fallacy is both well known and rather common (at least in the fields of political science and public administration). And, in my experience, it turns out to be tacitly supported by the policies of some journals and reviewers. For one of my previous research projects, I studied the relationship between public support and policy output in the EU. Since the state of the economy can influence both, I included levels of unemployment as a potential omitted variable in the empirical analysis. It turned out that lagged unemployment is positively related to the volume of policy output. In the paper, I mentioned this result in passing…
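To see why picking the best-fitting hypothesis after the fact and then reporting it as a deductive test is so treacherous, here is a minimal simulation sketch in Python. All parameters are made up for illustration: 20 candidate hypotheses per project, two-sample t-tests on pure noise, the usual 0.05 significance threshold.

```python
# Minimal sketch: how an inductive search over many hypotheses,
# reported as a single deductive test, inflates false positives.
# Assumptions (illustrative only): 20 candidate hypotheses per project,
# 50 observations per group, all hypotheses false by construction.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

n_projects = 10_000   # simulated research projects
n_hypotheses = 20     # candidate hypotheses tested per project
n_obs = 50            # observations per group
alpha = 0.05

projects_with_a_finding = 0
for _ in range(n_projects):
    p_values = []
    for _ in range(n_hypotheses):
        # Both groups are pure noise, so every hypothesis is false.
        a = rng.normal(size=n_obs)
        b = rng.normal(size=n_obs)
        _, p = stats.ttest_ind(a, b)
        p_values.append(p)
    # The inductive quest: keep the hypothesis that matched the data best.
    if min(p_values) < alpha:
        projects_with_a_finding += 1

print(f"Share of projects with a 'significant' finding: "
      f"{projects_with_a_finding / n_projects:.2f}")
# Roughly 1 - (1 - 0.05)**20 ≈ 0.64, even though nothing is true.
```

Even though every hypothesis is false by construction, roughly two thirds of the simulated projects can report a “significant” result, which is exactly what a deductive write-up of an inductive search conceals.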

Academic fraud reaching new heights

Academic fraud is reaching new lows. Dutch social psychologist Diederik Stapel (Tilburg University) is the culprit this time. A commission looking into the issue came up with a report [in Dutch] on Monday saying that “the extent of fraud is very significant” (p. 5). Stapel fabricated data for at least 30 papers published over a period of at least nine years (the investigation is still ongoing, and the number may rise to 150). Entire datasets supporting his hypotheses were made up out of thin air. He also frequently gave fabricated data to colleagues and PhD students to analyze and co-author papers with him. Diederik Stapel was an eminent and ‘charismatic’ scholar whose research made global news on more than one occasion. He had been awarded a Pioneer grant by the Dutch National Science Foundation. He is the man behind all these sexy made-up findings:

- Power increases hypocrisy
- Sexy doesn’t always sell
- Messy surroundings promote stereotyping and discrimination (published in Science!)
- Meat-eaters are anti-social

What a painfully ironic turn of events for Stapel, who also published a paper on the way scientists react to a plagiarism scandal. The whole affair first came to light this August, when three young colleagues of Stapel suspected that something wasn’t quite right and informed the university. What is especially worrisome is that on a number of previous occasions people had implicated Stapel in wrongdoing, but their signals were not followed up. In hindsight, it is easy to see that the data is just too good to…

How to get more citations: red-hot new evidence?

Wanna get more citations to your papers? Start with the title. No colons, no question marks [evidence here (gated); don’t look here]. More acronyms [link]. And don’t even think about humorous and amusing phrases [link]. Didn’t help? Don’t despair: “no more than 20% of citations of prominent papers involve the citer actually reading the papers in question” [link].

The present and the future of academic publishing

Academic publishing remains one of the most mysterious industries to me, even after being caught in its web for a while. I have found no better presentation of the idiocy of the whole system than this video (more here). Unfortunately, recent developments (at least in social science journals) do not make me very hopeful about the future. Economics journals are abandoning double-blind review (see for example here), and Political Analysis, which prides itself on being the number one political science journal, recently announced that it will do the same (there does not seem to be an official announcement yet on the site of the journal). According to the new policy, the identity of the authors will be revealed to the reviewers (who remain anonymous). The main argument for doing so is that in many cases the reviewers can guess the authors anyway. It is puzzling that economists and analytical political scientists, of all people, would fall for this argument. Even if many reviewers can guess or google the identity of the authors, double-blind review is still a Pareto improvement over single-blind review: while it may not work in all cases, it doesn’t hurt in any. I would rather encourage more accountability on the side of the reviewers. Anonymous or not, manuscript reviews should be public documents. Why not attach them to the digital copies of the articles when published (or, even better, when rejected)? I can see no harm in making the reviews publicly available by default. Instead, after serving as a reviewer…